
IEEE TRANSACTIONS ON PROFESSIONAL COMMUNICATION, VOL. 46, NO. 2, JUNE 2003 105

Action Research: Lessons Learned From a Multi-Iteration Study of Computer-Mediated Communication in Groups

—NED KOCK

Abstract—Action research has been presented as a promising approach for academic inquiry because of its focus on real world problems and its ability to provide researchers with a rich body of field data for knowledge building. Published examples of action research, however, are hard to find in business communication literature. What are the reasons for this? In this paper, I try to provide a basis for answering this question as well as helping other business communication researchers—particularly those interested in computer-mediated communication issues—to decide whether and when to employ action research. I offer a first-person, confessional tale-like account of an action research study of computer-mediated communication in groups. In order to focus on the lessons learned, my focus in this paper is on the process of conducting action research and not on empirical results. Some of the situations and related lessons discussed here are somewhat surprising and illustrate the complex nature of action research. The doctoral research, conducted over four years in Brazil and New Zealand, highlights the challenges associated with action research's dual goal of serving practitioners and the research community.

Index Terms—Action research (AR), business process improvement, computer-mediated communications, email, grounded theory, information systems.

Manuscript received December 7, 2002; revised February 10, 2003. The author is with the College of Business and Economics, Lehigh University, 621 Taylor St., Room 372, Bethlehem, PA 18015 USA (email: [email protected]). IEEE DOI 10.1109/TPC.2003.813164

Notwithstanding some controversy about its origins, action research (AR) seems to have been independently pioneered in the U.S. and Great Britain in the early 1940s. Kurt Lewin is generally regarded as one of its pioneers [1], [2] through his work on group dynamics in the U.S. Lewin was a German-born social psychologist with a strong experimental orientation. He migrated to the U.S. in 1933, after having served in the German army during World War I. Lewin initially settled at the State University of Iowa's Child Welfare Research Station (from 1935 until about 1945), later moving to the Massachusetts Institute of Technology where he founded and directed, until his death in 1947, the Research Center for Group Dynamics. Lewin is widely known for his contributions to the understanding of complex societal issues, such as gang behavior and discrimination against minorities, as well as for his application of GESTALT psychology to the understanding of group dynamics [3], [4]. He is also believed to have been the first person to use the term "action research" [5]. Lewin [6] defined ACTION RESEARCH as a specific research approach in which the researcher generates new knowledge about a social system, while at the same time attempting to change it in a quasi-experimental fashion and

0361-1434/03$17.00 © 2003 IEEE


with the goal of improving the social system. Lewin's approach to AR later became known as "classic" AR [7] and is, in a general sense, the approach in which my own AR practice is most deeply rooted.

A distinctive thrust of AR also developed after World War II in Great Britain at the Tavistock Institute of Human Relations in London. There, AR was used as an innovative method to deal with sociological and psychological disorders arising from prison camps and war battlefields [8]–[10]. While having an impact on individuals, society, and organizations that was comparable to Lewin's, the researchers from the Tavistock school of AR were less concerned with conducting AR in a quasi-experimental manner than with the solution of societal and organizational problems through change-oriented research. This school of AR has primarily addressed intra-organizational and worklife problems. One of the major topics, for example, is the issue of job satisfaction and its dependence upon several aspects of work situations [8], [9], [11]. The Tavistock school of AR has been very influential within the social and organizational research communities, and has led to several other forms of AR inquiry that can be seen as variants of that school, such as participatory and critical AR (see, e.g., [7] and [12]).

The use of AR as an approach for business inquiry with a focus on technology issues has been important to the field of information systems [12]–[21]. However, surveys of research approaches spanning more than two decades suggest that the number of published examples of AR has always been very small in comparison with case, experimental, and survey research [12], [22], [23]. Is this because AR is still new to business research, or, rather, because there are inherent and unique challenges in conducting AR? My experience suggests that AR

presents inherent and unique challenges to researchers, which I try to make explicit by offering a candid discussion of my doctoral AR on the effects of computer mediation on business process improvement groups in three organizations. The results of that study have been published in [24] and [25]. In this paper, I present a set of lessons learned in the format of a "confessional tale" [26]–[28], although this paper does not aim to be an exemplar of that reporting method. Usually, CONFESSIONAL TALES are written in the first person and reveal enough information about the researcher and the research study so readers can understand the subtleties of the social context in which the research was conducted. Confessional tales are also characterized by a level of candor not usually found in other forms of research reporting. These characteristics of confessional tales are incorporated into this paper.

The main goal of this paper is to describe the subtleties associated with conducting AR for business communication inquiry as well as some unexpected experience-based conclusions, summarized as lessons learned. Given the many forms of AR that emerged from the two original schools of AR pioneered by Lewin and the Tavistock group [7], [17], it would be inappropriate to present the lessons learned discussed here as being applicable to all forms of AR. In fact, I believe that there is no such thing as a "typical" AR study; each has its particular problems and peculiarities. Rather, my main expectation is that my story will illustrate the possible difficulties of AR and potential solutions. In particular, I believe that this paper will be especially useful to students using AR in their doctoral research. I hope that the lessons learned will capture the essence of the narrative and serve as points of reference, rather than universal rules for conducting AR.

Two main themes underlie the narrative presented here. The first theme is the personal appeal of AR, an exciting research approach that places the researcher in the middle of the action. As such, AR also allows the researcher access to "rich" context-specific data that would be difficult to collect through other, more traditional, research approaches. The second theme is the researcher's struggle to reliably generate valid findings from the analysis of a sea of data, which is often unstructured and laden with emotional attachments.

GENESIS OF THE RESEARCH PROJECT

Most research projects begin with the identification of a research topic that appeals to both the researcher and, ideally, the larger research community, often by means of a survey of published research and the gaps therein [29], [30]. In my case, my work as a consultant involved helping companies set up quality management systems with the goal of obtaining ISO9000 certification [31]–[33]. I found this topic very interesting, and after exchanging several emails throughout 1992 with my future advisor, whom I got to know almost by chance on the internet, I finally resigned from my job in Brazil and formally enrolled in 1993 in the doctoral program in information systems of the School of Management Studies at the University of Waikato, New Zealand.

While inspecting the literature, I noticed that most of the empirical computer-mediated communication research published in refereed journals focused on group decision support systems and was experimental in nature [34]–[37]. Group decision support systems have been designed and traditionally used to improve the efficiency of face-to-face business decision meetings through system features that automate the process of anonymously contributing,


ranking, and voting on ideas. Past research suggests that those systems, if properly used, usually lead to business meeting productivity gains [38], [39]. My previous work experience largely centered on facilitating these business process improvement (BPI) groups in Brazilian companies and helping them use information systems to implement new business processes. A BUSINESS PROCESS

is a set of interrelated activities, usually jointly carried out by people with different types of expertise. Examples of business processes are filling an order for a batch of exhaust pipes or preparing a budget for the construction of a three-story building. The people involved in carrying out a process are often referred to as members of a PROCESS TEAM [40].

My work as a consultant had fueled my interest in a specific problem facing the organizations I had worked with. Most BPI groups I had facilitated involved people from different departments who discussed and tried to solve problems related to a business process whose component activities they had to routinely perform as part of their job. The problem was that participation in BPI groups was very disruptive for group members, particularly if group discussions had to be conducted entirely face-to-face. While some of the attempts to conduct computer-mediated BPI groups using email conferencing systems in which I had been involved had been relatively successful, others failed miserably. More importantly, it was not clear what made some of those computer-mediated BPI groups succeed and others fail.

One of the difficulties I noted was that group decision support systems traditionally require "synchronous" interaction, that is, users must interact at the "same time," and usually also in the same room. Thus, these systems could not entirely solve

the problem that participation in BPI groups could still be disruptive for group members. One of the main obstacles to setting up BPI groups is that people in different departments have different work schedules and are often reluctant to work around those schedules to take part in face-to-face BPI group discussions.

Also, a few influential theories of computer-mediated communication suggested obviously contradictory outcomes of the use of asynchronous computer mediation in BPI groups. Among the most influential theories were social presence theory and media richness theory [41], [42]. Those theories essentially argued that for group tasks as complex (or "equivocal" in media richness theory terminology) as BPI, asynchronous computer mediation would lead to less desirable outcomes than those achieved by BPI groups interacting face-to-face. On the other hand, the social influence model argued that social influences could strongly shape individual behavior toward technology, independent of technology traits [43], [44]. My interpretation of the social influence model in the context of BPI suggested that certain social influences (e.g., perceived group mandate, peer expectations of individual behavior) could lead BPI members to adapt their use of technology in ways that were inconsistent with predictions based on the social presence and media richness theories.

So, it seemed to my advisor and me that I had been able to identify a gap in the empirical research literature and a theoretical dilemma that were both worth investigating and would hopefully get me a doctoral degree. I concluded that the topic of my research should be the effects of asynchronous groupware support on business process improvement groups. What I needed next was a good plan for my research project.

PLANNING THE RESEARCH: ITERATIONS OF THE AR CYCLE

My research plan was guided by two main project specifications. One of them was that the research should answer a broad question: What are the effects of asynchronous groupware support on business process improvement groups? Given that I spoke Portuguese and had access to Brazilian organizations, the other specification was that data collection should take place partly in Brazil and partly in New Zealand. In this way, I hoped to identify and isolate the influence of cultural idiosyncrasies on the research findings and increase the external validity of the research [45]–[47].

My review of research approaches and methodologies suggested that three main research approaches had been successfully used in business research addressing technology issues: experimental, survey, and case research [23], [48], [49]. At about the same time, I got hold of a set of slides from a recent presentation by Julie Travis (from the Curtin Institute of Technology, Australia). The presentation was about an intriguing research approach called "action research." Up until then, I had never heard about AR, which, at first glance, struck me as incorporating several elements of what I thought to be good consulting. My subsequent library and internet research left me with the impression that there was disagreement among AR practitioners about its precise definition [7], [50], [51]. However, I also found some clear distinctions between AR and three common research approaches (see Table I).

In my mind, conducting AR in a business context involved helping one or more organizations become "better" (e.g., by improving their productivity, the quality of their products and/or services, working conditions, etc.) and, at the same time, doing research (i.e., collecting and analyzing research


data). This combination of "action" and "research" [62], [63] was, and still is, one of the most appealing aspects of the method. Having decided to employ AR, I planned my research as a set of a few iterations of Susman and Evered's [64] AR cycle (see Fig. 1); one to be conducted in Brazil, and the others in New Zealand. The focus of my

investigation would be BPI groups supported by internet-based email conferencing systems (one "mini-listserv" would be set up to mediate interaction between the members of each BPI group).
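The "mini-listserv" arrangement can be pictured as a simple message reflector: one address per BPI group, with each posting relayed to the other members of that group. The sketch below is purely illustrative; the group names, addresses, and the `relay` function are my own inventions and not part of the systems used in the study:

```python
# Illustrative sketch of a "mini-listserv": one mailbox per BPI group,
# with each posting relayed to the other members of that group.
# All group names and addresses below are invented for illustration.
GROUPS = {
    "bpi-scheduling": ["ana@example.com", "joao@example.com", "rita@example.com"],
    "bpi-budget": ["carla@example.com", "pedro@example.com"],
}

def relay(group, sender, body):
    """Return the (recipient, message) pairs a reflector would send out."""
    tagged = f"[{group}] {body}"
    return [(member, tagged)
            for member in GROUPS[group]
            if member != sender]  # the sender does not receive a copy
```

A posting to `bpi-scheduling` thus fans out to the other two members, which captures the asynchrony that mattered to the study: members read and reply on their own schedules rather than in a shared meeting slot.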

Susman and Evered's AR cycle comprises five stages: diagnosing, action planning, action taking,

evaluating, and specifying learning [64]. (1) The DIAGNOSING

stage, where the cycle begins, involves the identification of an improvement opportunity or a general problem to be solved at the client organization. (2) ACTION

PLANNING involves the consideration of alternative courses of action to attain the improvement or solve

TABLE I CONTRASTING AR WITH OTHER MAJOR RESEARCH APPROACHES

(ADAPTED FROM [60], [61])

Fig. 1. Susman and Evered’s AR cycle [64].


the problem identified. (3) The ACTION TAKING stage involves the selection and implementation of one of the courses of action considered in the previous stage. (4) The EVALUATING stage involves the study of the outcomes of the selected course of action. (5) Finally, the SPECIFYING LEARNING

stage involves reviewing the outcomes of the evaluating stage and, based on this, knowledge building in the form of a model describing the situation under study. In studies that involve several iterations of the AR cycle, the specifying learning stage is followed by the diagnosing stage of a subsequent cycle, which can take place in the same context or in a different one (e.g., a different department or company).
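As a schematic summary (not an instrument from the study itself), the five stages and the hand-off from specifying learning to the next diagnosis can be sketched as a loop over contexts; the field names and placeholder strings below are my own shorthand for the stage outputs:

```python
from dataclasses import dataclass, field

@dataclass
class Iteration:
    """Record of one pass through Susman and Evered's five-stage AR cycle."""
    context: str              # e.g., a department or a company
    diagnosis: str = ""       # (1) improvement opportunity or problem
    plan: str = ""            # (2) course of action chosen from alternatives
    action: str = ""          # (3) what was actually implemented
    evaluation: str = ""      # (4) observed outcomes of the action
    lessons: list = field(default_factory=list)  # (5) specified learning

def run_ar_study(contexts):
    """Run one iteration per context; learning carries into the next diagnosis."""
    carried_forward = []      # lessons available when the next cycle starts
    iterations = []
    for ctx in contexts:
        it = Iteration(context=ctx)
        it.diagnosis = (f"problem identified at {ctx}, "
                        f"informed by {len(carried_forward)} prior lessons")
        it.plan = "course of action selected from alternatives"
        it.action = "selected course of action implemented"
        it.evaluation = "outcomes of the action studied"
        it.lessons.append(f"lesson specified at {ctx}")
        carried_forward.extend(it.lessons)
        iterations.append(it)
    return iterations
```

The point of the sketch is the sequencing: each stage consumes the previous stage's output, and the lessons specified in one iteration become input to the diagnosing stage of the next, whether the next context is the same organization or a different one.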

Selecting Client Organizations Research topic and client organization selection are closely interdependent tasks in AR. Given AR's dual goal, topic selection is driven in part by organizational needs [7], [11]. The extent to which client organizations influence topic selection in AR varies. Some AR practitioners have argued that topic selection should be defined based primarily on the needs of a potential client organization [65], [66]. Others seem to believe that topic selection should result from the identification of a research gap based on a survey of the related literature [62]. The existence of such divergent opinions underlies what Rapoport refers to as AR's "initiative dilemma," characterized by the researcher having to choose between either defining a research topic beforehand and then searching for suitable client organizations, or approaching potential client organizations and defining a research topic based on specific needs of those organizations [10].

The resolution of this dilemma in AR is likely to be a test of the researcher's ability to identify topics that are relevant from both a research and an organizational perspective [67]. It can reasonably be expected that if a predefined AR

topic is not particularly relevant to organizations, then finding interested client organizations may be very difficult. On the other hand, letting a potential client organization pick a research topic may have a negative impact on research relevance. For example, most organizations need new computer-based databases from time to time, yet standard database development would hardly be an acceptable topic for an academic study, much less a doctoral AR project focusing on technology-related issues. An advisable approach is to define, in general terms, an expected research contribution, and, subsequently, criteria for selecting client organizations that are closely tied to the expected research contribution of the AR project [67].

In my research, the criteria for selecting client organizations included commitment to BPI and initial absence of computer support for BPI activities. The first criterion, commitment to BPI, could be demonstrated by the existence of at least one formal organization-wide BPI program, such as a total quality management [68]–[72] or ISO9000 certification program [31]–[33]. The second criterion was aimed at allowing me to observe the impact of the use of computer systems to support BPI groups the first time computers were used for that purpose in the organization.

BRAZILIAN PHASE OF THE STUDY: INITIAL ITERATION OF THE AR CYCLE

I approached several organizations over a three-month period with a plan to facilitate BPI groups with the support of an asynchronous groupware system, which I proposed to develop as an email conferencing system based on commercial, off-the-shelf email packages. Initially, I focused my efforts on the city in which I lived, as I expected to spend a great deal of time in the field, collecting

research data and performing activities related to the action component of my research. The city was an industrial center in Southern Brazil and had a population of about 1.5 million. One organization (EventsInc, a pseudonym), whose revenues came chiefly from the organization of large professional and trade events (e.g., exhibitions and conferences), agreed to participate in the AR project.

In order to provide the reader with an illustration of how each stage of the AR cycle relates to the other stages, this section is organized around the five main AR cycle stages described above. Later, when I describe the New Zealand phase of the study, a less structured narrative approach will be used to highlight specific idiosyncrasies of AR.

Diagnosing: The Prospect of "Killing Two Birds With One Stone" EventsInc was facing two nagging problems that its management believed could be solved through the AR project. One of the problems was that its local area network of computers was not working properly, and was preventing the full deployment of an email package they had purchased a while ago. The other problem was that their existing approach to BPI was ineffective. EventsInc's approach to BPI involved employees being routinely called to participate in strategic decisions (e.g., whether to sign a large contract with the government or purchase the rights to a yearly professional conference) independently of their position, formal responsibilities, and hierarchical level in the organization. The approach was inspired by a participatory management method advocated by Semler that became wildly popular in Brazil in the early 1990s [73]–[76]. Eventually, EventsInc's management found out that Semler's recommendations led to two undesirable consequences. The first was that employees often


spent long hours making decisions that were outside their sphere of competence and had little to do with the business processes they were immediately responsible for. The second was that, even though it was clear to employees that they were not prepared to make certain strategic decisions, they were often offended when their suggestions were not implemented.

EventsInc’s management viewedmy proposed AR intervention aslikely to solve both problems. Theyalso saw the AR project as anopportunity to “get on the righttrack” with the BPI program andimprove some of the core processesof the organization, particularlythose related to the planning andscheduling of events. My work atEventsInc began in August 1993and lasted approximately one year.

Action Planning: Setting Up an Organizational Structure for BPI BPI groups were expected to tackle a number of problems whose scope varied from local departments to the whole business. From the outset, it became clear that my temporary status at EventsInc would be equivalent to that of a "director," answering directly to the chief executive officer. I was going to be paid an hourly fee for my participation in the project and was introduced to employees by the chief executive officer as an organizational consultant and, "by the way," also as a researcher.

The iteration of the AR cycle was expected to last approximately one year. It was agreed that the iteration would begin with a number of training sessions in which I provided all employees with formal hands-on training on BPI techniques. Each BPI group was expected to have a self-appointed leader, who could be anyone in the organization and who should select and invite other employees to participate in the BPI group.

Our plan specified that whenever the implementation of a BPI

proposal required the involvement of people outside the group (e.g., the purchase of expensive equipment or changes in processes outside the sphere of authority of the group), the proposal would be handed to a BPI committee to be evaluated. This committee included members of the board of directors and me. Should the proposals be considered attainable and likely to generate a return on investment, the BPI group leader would be given formal authorization to coordinate, on behalf of the chief executive officer, the implementation of the proposals with the appropriate departments (e.g., equipment purchase with the purchasing department, equipment set up with the information systems department, etc.).

Action Taking: Facilitating BPI Groups Seven training sessions were held over a three-week period. These sessions, which lasted one full day each, gave me the opportunity to get to know managers and employees on a more personal basis and establish an initial rapport with them.

Much to my relief, the computer network problems were relatively easy to fix, and the email conferencing system was installed without any major problems. The system allowed BPI groups to post electronic messages onto mailboxes created for each group discussion. Reading and posting rights could be granted to all employees or a small set of users (e.g., the group members). Twenty-six BPI groups were conducted, of which 11 interacted only face-to-face because the email conferencing system was not yet available. Most of these groups lasted no more than 40 days.

Evaluating: Good News and Bad News (Or the "Shocking Truth") In an attempt to ensure data triangulation, four main types of data were collected during the action taking stage: interview notes, participating observation

notes, archival data (e.g., internal memos, forms, technical manuals, and internal publications), and electronic postings by BPI group members [77], [78].

I set out to analyze the qualitative data collected using the three-step coding process proposed by grounded theory methodology [79]–[82]. The first step, OPEN

CODING, involves the identification of emerging categories (i.e., variables) in the textual data. The second step, AXIAL CODING, involves the identification of relationships between the variables identified by open coding. The third step, SELECTIVE CODING, involves the grouping of interrelated variables into models (e.g., causal models). However, the closest I was able to get to a blueprint to perform these three steps in a "reliable way" (i.e., in a way that could be replicated by other researchers) was an earlier version of the excellent, encyclopedic book of qualitative analysis techniques by Miles and Huberman [83]. At that time, advice from more experienced qualitative researchers was not to worry about coding reliability, as qualitative research was by its own nature "subjective." Eventually, I developed my own approach to data analysis—an adaptation of grounded theory described in more detail below.
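The three coding steps can be illustrated with a toy sketch. The excerpts and category tags below are invented, and real open coding assigns categories by close reading rather than by lookup; the sketch only shows how the outputs of one step feed the next:

```python
from collections import Counter
from itertools import combinations

# Open coding: excerpts from the textual data, each tagged with the
# emergent categories (variables) a researcher identified in it.
# These excerpts and tags are hypothetical examples.
open_coded = [
    {"text": "face-to-face meetings disrupt my schedule", "tags": {"disruption"}},
    {"text": "with email I reply whenever I am free",     "tags": {"asynchrony", "adoption"}},
    {"text": "the group expected everyone to post",       "tags": {"peer_expectations", "adoption"}},
]

# Axial coding: look for relationships between the variables, here
# crudely approximated by counting how often two categories co-occur
# within the same coded segment.
co_occurrence = Counter()
for segment in open_coded:
    for a, b in combinations(sorted(segment["tags"]), 2):
        co_occurrence[(a, b)] += 1

# Selective coding: group the supported relationships into a candidate
# model, here reduced to an edge list of linked variables.
model_links = [pair for pair, count in co_occurrence.items() if count >= 1]
```

The reliability worry mentioned above lives in the first step: two coders reading the same excerpt may assign different tags, and everything downstream (the co-occurrence counts and the resulting model) inherits that disagreement.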

Nevertheless, while an in-depth analysis was needed for the "specifying learning" stage, there was a sense of urgency to analyze the data for an initial report to the company. This led me to conduct perception frequency analyses of interviews and to triangulate the results with participant observation notes, electronic postings, and other documents, as discussed by Miles and Huberman [83] and Yin [84]–[86]. In general terms, the results of this analysis suggested that the project had been very successful. Significant efficiency gains in local processes due to the decentralization of access to information, a major simplification of the organization's


departmental structure, and a 7% increase in revenues were the main bottom-line results of the major changes brought about by BPI groups addressing "core" business processes (i.e., processes that cut across several departments or the entire company). The BPI groups addressing "local" processes (i.e., those restricted to one or two departments only), on the other hand, made a number of incremental improvements in the quality and productivity of local (mostly departmental) processes, and brought about a general improvement in internal morale and in the quality of the relationship between management and line staff. These results were met with enthusiasm by both management and employees.

Given their enthusiasm about the results, I expected EventsInc's management to want competitors to be as far away as possible from the company's premises so they would not be able to copy EventsInc's new approach to BPI. Nevertheless, on several occasions, the chief executive officer invited the owners of a competing company to see the intermediate results of the project. The visitors, who were introduced to me as "some friends" by the chief executive officer, usually asked me (repeatedly) questions about the impact of BPI groups on EventsInc's bottom line (e.g., sustained increases in sales, profitability, etc.).

Approximately nine months into the project, I heard from one irate executive that EventsInc was undergoing the first stages of an amicable acquisition process by a competitor—exactly the one whose representatives had been visiting EventsInc and asking me questions. My AR project was discontinued. I was asked by EventsInc's chief executive officer to conduct an analysis of the project and summarize it in a business report to be considered by the acquiring company's board of directors in the assessment of

EventsInc’s market value. Thisincident taught me an importantlesson about AR, summarized asLesson 1 below.

Lesson 1: Intervention does not equate to control in AR. While in AR the researcher applies intervention in the environment being studied, he or she has very little control over what actually happens.

Lesson 1 highlights the fact that although applying intervention on the environment being studied may give the researcher the false impression that he or she is "in control" (somewhat like in a laboratory experiment), the researcher has in fact very little control over what actually happens and how. A plausible conclusion based on this lack of control is that AR is a risky research approach that should be avoided, particularly by doctoral students (who need to complete their research within a set period of time). However, there are ways in which this lack of control can be dealt with. Perhaps the most obvious is to plan the AR project in such a way that more than one organization is involved, so that the researcher is not completely dependent on one single group of people to complete the research. This approach was adopted here, as will become clear as the narrative progresses.

As soon as the news about the acquisition became public, key employees left the company in disgust. Conversations with management and employees suggested that the general feeling was that the BPI project had been used to add market value to the company and benefit the major shareholders in a potential sellout. I was seen as an "evil consultant" by some of the key employees who left the company. Others saw me as a "not very perceptive consultant" (actually, "idiot fool" was the term used by one manager) who had been manipulated by the chief executive officer. In my own judgment, the latter perception was more accurate, as it had not been clear to me what was going on until late in the project. I wrote the report, left the company, and started my preparations to travel to New Zealand.

Specifying Learning: Lost in a Sea of Data

In this first iteration of the AR cycle, I began what became a habit throughout my research—to write a paper for submission to a conference first, and, after revisions, to a refereed journal, summarizing the main findings of the research iteration. While time-consuming, this proved to be a very useful habit, as it forced me to compile the results of the data analysis conducted during each iteration, review these results against those of previous iterations, summarize them as part of a model, and draw implications for research and practice. An additional benefit of this habit was that I was able to learn what several researchers, who served as conference and journal reviewers, thought about my research.

Having just left the research site, I found myself overwhelmed not only by the large body of data to be analyzed but also by important decisions that I had to make in order to be able to produce what I saw as "relevant knowledge," the main goal of the "specifying learning" stage of the AR cycle [64]. It is common in AR for the researcher to become an agent of change and, thus, be deeply involved with the subjects and the environment being studied. In my experience, this is most likely to induce broad and unfocused data collection. Every observable event, comment by a BPI group member, printed document, electronic posting, etc., became a data point for me. Also, since I had collected a large amount of relatively unfocused data, key questions emerged in connection with what to address in the analysis of the data. For example, should the use of groupware-supported BPI by management as a means of (arguably unethically) adding value to a soon-to-be-sold company be the main focus of my analysis, or should the target of my analysis be the impact of groupware on BPI groups?

As with past research projects, what saved me from total confusion was the use of a systematic method, namely an adaptation of Glaser and Strauss's [81] grounded theory methodology to my particular situation (see the Appendix). Also, I decided to stick with the original goal of the research, which was to investigate the impact of asynchronous groupware support on BPI groups. This led me to disregard the selling-out incident and focus on the interaction between people and technology at the group level of analysis.

Central to grounded theory methodology [79]–[82] is a three-phase iterative coding process, which starts with open coding, moving then to axial coding, and ending with selective coding. While this process is well explained in the normative literature on grounded theory methodology, translating it into practice was no easy task, requiring some creativity and a clear idea of how the research findings were to be modeled. Otherwise, I must admit, I could have easily confused open with axial or with selective coding. After trying different alternatives for modeling research findings, I decided to use traditional causal models [87], [88] such as the one shown in Fig. 2.

Causal models are made up of four types of variables: independent, intervening, moderating, and dependent variables [89]–[91]. Open coding consisted of identifying individual variables in the causal model. Axial coding consisted of identifying causal links between pairs of variables. Selective coding consisted of identifying dependent variables and sets of interrelated variables that made up each causal model (which meant that several causal models were developed). More details on this coding process are provided in the Appendix.
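The mapping of the three coding phases onto causal model construction can be illustrated with a small data-structure sketch. This is my own illustration, not code from the study; the variable "breadth of participation" is a hypothetical example, while the other two variable names appear in the study.

```python
# Illustrative sketch of the adapted grounded theory coding output.
# "breadth of participation" is a hypothetical variable, used here only
# to show how open, axial, and selective coding fit together.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Variable:
    name: str
    role: str  # "independent", "intervening", "moderating", or "dependent"

@dataclass
class CausalModel:
    links: list = field(default_factory=list)  # (cause, effect) Variable pairs

# Open coding: identify individual variables in the data.
ec_support = Variable("email conferencing support availability", "independent")
participation = Variable("breadth of participation", "intervening")
bpi_quality = Variable("organizational BPI effectiveness", "dependent")

# Axial coding: identify causal links between pairs of variables.
links = [(ec_support, participation), (participation, bpi_quality)]

# Selective coding: group interrelated variables around dependent
# variables into a causal model (several such models were developed).
model = CausalModel(links=links)
dependent = {v.name for _, v in model.links if v.role == "dependent"}
print(dependent)  # the variables this model tries to explain
```

The sketch makes the division of labor explicit: open coding populates the set of `Variable` objects, axial coding populates `links`, and selective coding assembles a `CausalModel` around its dependent variables.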

It is not uncommon to find in the AR literature the recommendation to begin the research with a clean slate, so as to allow the findings to truly emerge from the data [66], [92]. This is also one of the main premises of grounded theory methodology [80].

Fig. 2. Example of causal model. (Generated during the first iteration of the AR cycle. EC = Email conferencing.)

However, the extent to which emergence occurs varies considerably as one moves from open to axial and to selective coding. In open coding, where variables are identified, the degree of emergence is apparently higher than in axial coding, where a basic set of variables already exists and the researcher searches for cause-effect links between variables. The degree of emergence in axial coding is, in turn, apparently higher than in selective coding, where sets of linked variables are grouped. As a result, it is apparently easier to define criteria and guidelines for selective than for axial coding, and for axial than for open coding, that will lead to similar results if employed by different coders. Having said that, I also must say that I had tremendous difficulty getting started with open coding and would have preferred to skip this step altogether if I could. It is possible to devise reliable [46] analysis procedures for selective coding and for axial coding, but the same seems very difficult for open coding, which apparently did not escape Strauss and Corbin's [93] attention in the latest version of their grounded theory methodology book. The key reason here seems to be that, when using open coding with data collected through AR, the researcher tries to extract constructs from a very large body of unfocused and unstructured data.

I put this to the test by asking two colleagues to analyze a subset of the data I had analyzed using the adapted grounded theory coding process described in the Appendix. While open coding led to conclusions that were difficult to reconcile, when we started with the same initial constructs, axial and selective coding led to very similar results (i.e., effects, explanations, and causal models). Whenever it is difficult to devise coding procedures that can be replicated by others, it is also difficult to convince others (and even oneself) that the coding process has not been influenced by subjective factors, such as personal preconceptions and feelings toward particular individuals. This leads me to Lesson 2, which is summarized below.

Lesson 2: Open coding is unlikely to lead to reliable analysis results in AR. While apparently straightforward, the grounded theory methodology technique known as open coding is unlikely to lead to the same results when employed by different researchers on the same body of data.
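The informal two-coder comparison described above can be quantified with a simple set-overlap measure. The sketch below is my own, not a procedure from the study, and the construct lists are hypothetical; it merely shows why agreement would be expected to be low after independent open coding and high once coders share the same initial constructs.

```python
# Illustrative intercoder agreement on identified constructs. The
# construct lists are hypothetical; the study reports only a
# qualitative comparison between coders.
def jaccard(coder_a: set, coder_b: set) -> float:
    """Set-overlap agreement: |A & B| / |A | B|."""
    if not coder_a and not coder_b:
        return 1.0
    return len(coder_a & coder_b) / len(coder_a | coder_b)

# Open coding from scratch: coders extract largely different constructs.
open_a = {"tone of postings", "meeting length", "group size"}
open_b = {"message ambiguity", "group size", "management support"}

# Axial coding from a shared construct set: near-identical links emerge.
axial_a = {("ec support", "participation"), ("participation", "bpi quality")}
axial_b = {("ec support", "participation"), ("participation", "bpi quality")}

print(jaccard(open_a, open_b))    # low agreement
print(jaccard(axial_a, axial_b))  # high agreement
```

With these placeholder lists, open coding yields an agreement of 0.2 against 1.0 for axial coding, mirroring the pattern Lesson 2 describes.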

As a result of Lesson 2, I decided to eliminate the open coding step in further iterations of the AR cycle by collecting data around a set of predefined variables tied to a set of research questions; this proved to be a wise decision from a data analysis perspective. The problem of devising reliable coding procedures was particularly acute in this Brazilian phase of the AR project, regarding the open coding step. That problem was compounded by the fact that data collection had been done in an unstructured and, most importantly, unfocused way. Thus, my conclusions were very tentative and accompanied by several caveats and limitations. Nevertheless, I summarized this phase of the research in a conference article that eventually received the best conference paper award at a large conference in Australia. Frankly, I am unsure as to whether I really deserved the award. I interpreted the award primarily as a sign of AR's appeal in the field of information systems (the focus of the conference), where researchers seldom bridge the gap between themselves and the practitioners they study [12], [19], [23].

NEW ZEALAND PHASE OF THE STUDY: SECOND, THIRD, AND FOURTH ITERATIONS OF THE AR CYCLE

The city in New Zealand where my university was located had a population of about 300,000, and its economy revolved around the production of industrialized food, paper, and plastic products, as well as the genetic enhancement of edible plants and animals. Gaining access to client organizations in New Zealand was nowhere near as easy as it had been in Brazil. I had to face a new reality; this was a country in which I had recently arrived and in which I had no business contacts. My level of English skills and my difficulty understanding the local accent and idiom did not make the situation any easier.

Most of the organizations I contacted declined participation. Some organizations were willing to discuss the AR project but demanded changes in the research topic to fit their specific needs. For instance, the plant manager of a manufacturer of plastic products showed some interest in the research project but was skeptical about the usefulness of an email conferencing system as a new interaction medium for BPI groups. He was, nevertheless, interested in the workflow control features found in some commercial asynchronous groupware systems. Given this, he proposed that the research project be carried out at his plant, on one condition—that the focus of the research be on the development of workflow control applications. The change would allow the company to tie the AR project to an ongoing effort geared at improving the productivity of production and inventory control processes at the plant. I analyzed the situation carefully and decided to decline the offer on two main grounds. First, most of my research work done up until then (literature review, general research design, etc.) would be lost. Second, my interest as a business communication researcher was primarily in people's behavior toward technology rather than in technology development issues.

At the beginning of my doctoral research, I had discussed with my advisor the possibility of conducting one or more iterations of the AR cycle at our own university. The rationale for this was that being an insider would allow me to interpret any patterns in the data very accurately and better understand similar patterns in other organizations. After about six months, my inability to gain access to an appropriate external site provided the extra motivation that I needed to put this idea into practice. Coincidentally, our university had recently begun a university-wide BPI program, and one of its colleges was going through the final stages of ISO 9000 certification. I approached the dean of that college and some of the members of his office about conducting BPI groups with email conferencing support. The idea was well received and led to my second iteration of the AR cycle, where a pilot BPI group completed its work with success under my facilitation. Later, I conducted a fourth iteration of the AR cycle (the third iteration was conducted at a different organization), involving five BPI groups in the college (CollegeOrg, a pseudonym).

While conducting the second iteration at CollegeOrg, I kept trying to gain access to outside organizations, without much success. Toward the end of the second iteration, I met an official of a branch of the New Zealand Ministry of Agriculture and Fisheries (GovernOrg, a pseudonym) during a chance encounter and, sensing the opportunity, described my AR project and invited him and his organization to participate. He agreed to arrange a meeting where I could discuss the project in more detail with him, one of GovernOrg's quality managers, and an information systems team leader. The meeting went well, in part due to the supportive remarks by my new friend, who served as a business communications and public relations officer at GovernOrg and who had by then assumed the role of champion of the AR project. Two other meetings with senior executives followed this first preliminary meeting, after which I was given formal permission to conduct part of my research project at GovernOrg. A few months later, I completed a report summarizing interviews with management and line employees at GovernOrg and proposing a more detailed project plan. The plan was formally approved and allowed me to conduct my third iteration of the AR cycle, beginning in September of 1995. This experience highlights an important lesson about gaining access to organizational field research sites, summarized in Lesson 3 below, that has been aptly put by Barley: "despite an academic's proclivity to think otherwise, who one knows is often far more practical than what one knows" [26, p. 228].

Lesson 3: Gaining site access in AR is a matter of knowing the right people. While in AR the researcher may think that offering a service to organizations and presenting himself or herself as an expert in a particular area will make it easy to gain access to a site, that will never happen without the "right contacts" and the support of the "right people."

Structure of the Second, Third, and Fourth Iterations of the AR Cycle

The second, third, and fourth iterations of the AR cycle, conducted in New Zealand at GovernOrg and CollegeOrg, each had the same stages as the previous iteration: diagnosing, action planning, action taking, evaluating, and specifying learning. Each iteration led to the building of explanatory causal models based on the analysis of the evidence gathered during the iteration. At the end of each iteration, I compared its findings with the findings of previous iterations. The comparison highlighted invariable patterns and discrepancies, which I tried to explain by revising existing causal models and creating other higher-level causal models (or "meta"-models) that explained patterns and discrepancies across iteration-specific causal models.

As mentioned before, in the second iteration of the AR cycle, I led and facilitated one BPI group at CollegeOrg. This allowed me to refine the BPI group methodology and the asynchronous groupware tool used by BPI groups in Brazil. At the end of that iteration, I wrote a brief manual to help guide the work of future BPI group members. Also, during the second iteration, I developed a refined asynchronous groupware tool based on a commercial groupware system used at both CollegeOrg and GovernOrg, namely Novell Groupwise (trademark of Novell Corporation). During the third and fourth iterations, I facilitated eleven BPI groups using the BPI group manual and the asynchronous groupware tool refined in the second iteration. Six of these groups were conducted at GovernOrg. The five remaining groups were conducted at CollegeOrg.

Describing each stage of the three iterations conducted in New Zealand would be somewhat repetitive and take a considerable amount of space. Instead, I will focus my attention in this section on other issues that are more generic and directly related to conducting AR. I will start by highlighting differences between this phase of the research and the previous phase.

More Focused Approach for Data Collection: Did It Affect "Emergence"?

One of the key differences between this phase (in New Zealand) and the previous phase (in Brazil) was a more structured approach to the collection of research data, which included the use of semistructured interviews addressing specific variables (identified in the previous phase of the research). I refer to the interviews as semistructured because even though they were based on a predefined list of questions, they were in-depth interviews, as defined by Sommer and Sommer [63]. As such, each question from the predefined set of questions led, once answered, to several other follow-up questions. Even though the follow-up questions depended on each respondent's answers, they were based on simple guidelines, such as probing further for "why" and "how." Semistructured interviews let researchers focus the data collection on a set of predefined variables and, at the same time, allow them to identify other variables that were not addressed by the questionnaire. These new variables usually emerge from the analysis of answers to the follow-up questions.

While not based on a rigorous empirical test, one strong perception remains. My decision to use semistructured interviews removed some of the uncertainty associated with open coding because it focused data collection, at least initially, around certain variables. It also limited the amount of evidence I collected, which, in turn, facilitated data analysis. Given that in the first phase of the research (in Brazil) data collection was much less structured and that key variables were identified then, it seemed reasonable to design the semistructured interviews based on those variables. This did not do much to reduce emergence though, as new variables emerged in each iteration (see Table II) from my attempts to explain (see the Appendix) the effects observed.

The first iteration had begun with a pseudoresearch framework of only three variables. These were the main independent variable of my research, namely "email conferencing support availability," and two variables that reflected the impact of technology on BPI at the organizational level, namely "organizational BPI efficiency" (or the "productivity" of BPI) and "organizational BPI effectiveness" (or the "quality" of BPI). Several new variables were added to these, in a particularly intense way in the second and third iterations. This taught me Lesson 4, summarized below.

Lesson 4: Skipping open coding does not prevent construct emergence in AR. While a bit counterintuitive, skipping open coding in AR by collecting data in connection with predefined constructs does not prevent the researcher from identifying new emerging constructs from the data, as long as not only quantitative data is collected.

TABLE II: NUMBER OF VARIABLES IN EACH OF THE ITERATIONS

Only three new variables emerged in the fourth iteration, which signaled that the criterion proposed by Ketchum and Trist to identify the final cycle of a multi-iteration AR study was satisfied and that the fourth iteration could be the last [94]. Ketchum and Trist saw the frequency of the iterations of the AR cycle as likely to decrease and eventually stop as the match improves between the researcher's conception of what they refer to as the sociotechnical system and the actual sociotechnical system being studied [94]. This match can be assessed based on the similarity between the models generated in the specifying learning stage of each pair of successive iterations of the AR cycle.
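Ketchum and Trist's stopping criterion, as applied here, can be sketched as a check on how many new variables each iteration's model adds over the previous one. The variable sets and the threshold of three below are illustrative placeholders, not the study's actual Table II figures.

```python
# Hypothetical sketch of the stopping criterion: iterate until an
# iteration adds few or no new variables to the evolving model. The
# variable sets below are placeholders, not the study's actual data.
def new_variables(previous: set, current: set) -> set:
    """Variables in the current iteration's model absent from the previous one."""
    return current - previous

def converged(previous: set, current: set, threshold: int = 3) -> bool:
    """Stop when an iteration adds no more than `threshold` new variables."""
    return len(new_variables(previous, current)) <= threshold

iter3 = {"v1", "v2", "v3", "v4", "v5", "v6", "v7", "v8"}
iter4 = iter3 | {"v9", "v10", "v11"}  # only three new variables emerged

print(converged(iter3, iter4))  # True: the fourth iteration can be the last
```

The same comparison of successive models underlies the qualitative assessment described above: as the researcher's conception converges on the actual sociotechnical system, each new iteration changes the model less.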

Being Part of the Action: Is It Always Fun?

The researcher's involved stance in AR undoubtedly has great appeal to many, but it also has its downsides, as noted in the narrative of the Brazil phase of the research project. The researcher can easily get entangled in factional fights for power and control, organizational politics, and personal animosities between individual participants. While in New Zealand my perceived status at both GovernOrg and, particularly, CollegeOrg was much lower than at EventsInc, my involvement in the "action" was just as intense.

At GovernOrg, two senior executives who reported directly to the chief executive officer had sanctioned the AR iteration to be conducted in their divisions. It became clear as the research progressed that these two senior executives had very different personalities and management styles. Among the differences was that one adopted a very democratic and consultative management style, whereas the other adopted a more autocratic and somewhat authoritarian style. While the democratic manager rarely did so, the autocratic manager often made key organizational decisions alone. The effect that computer-supported BPI groups had on the senior executives was equally distinct. After four BPI groups had been conducted, involving employees from both divisions, a clear divergence of perceptions could be observed. The democratic manager's view of computer-supported BPI groups was very positive. He believed that a national program should be instituted so as to use computer-supported BPI groups to improve business processes throughout the Ministry of Agriculture and Fisheries (of which GovernOrg was a branch). The autocratic manager, on the other hand, felt that computer-supported BPI groups were a big waste of time, as well as an obstacle to swift senior management decisions.

One interesting effect of computer mediation on BPI groups was that, even though asynchronous electronic contributions were in no way anonymous (contributors were identified in the "sender" field of their electronic postings), many participants admittedly expressed their opinions more freely than they would have in face-to-face meetings. Even though computer mediation had no effect on the actual organizational status of the participants, it did seem to make it harder for a traditionally dominant member to take control of the group. Dominant members in face-to-face meetings are usually the ones higher in the organizational hierarchy. While computer-supported BPI groups allowed the democratic manager to learn more about what his subordinates thought, they also created situations where the autocratic manager heard (i.e., read in electronic messages) things that he did not want to hear from outspoken employees. His misgivings were compounded by the fact that I had performed a simple audit of the divisions run by each manager, democratic and autocratic, at the beginning of the research iteration to identify opportunities for improvement. That audit unveiled the fact that the productivity (assessed by standard metrics such as revenues per employee) in the democratic manager's division was higher than in the autocratic manager's division.

From the fifth BPI group on, the autocratic manager became openly hostile toward me. Among other things, he openly questioned my credentials, arguing that someone else with a better understanding of the New Zealand culture could do a better job, and pointed at some of my English mistakes to highlight my foreign origin and strengthen his argument. Since my Ph.D. was on the line, it was relatively easy for me to find enough reasons to ignore these expressions of hostility.

However, hostility turned into a direct order to abandon the research site during one of my interviews with a BPI group leader. Unlike at EventsInc, I had not been given an office at GovernOrg. Therefore, I usually conducted my interviews either in the interviewee's office or at the local cafeteria. In the middle of one of these interviews, at one of the tables in the cafeteria, the autocratic manager approached me and said, screaming: "You have a very cushy lifestyle, huh? Every time I see you here, you're in the cafeteria taking a break! What makes you believe that you can drag my people into this kind of lifestyle too? We have work to do here! You know?" Several employees who were at the cafeteria at the time looked at us, while the person whom I was interviewing (a manager who reported to the person screaming at me) noticeably paled. I explained to the autocratic manager that I usually conducted my interviews at the cafeteria because I did not have an office at GovernOrg. He continued his public reprimand for what seemed to be a minute or so and eventually told me that both my interview and my research at GovernOrg were over. Five BPI groups had been conducted at that time, and I had intended to facilitate another one soon.

A few days later, the autocratic manager called me on the phone and apologized for his actions. In his own words, "You made some mistakes, but did not deserve that much." I accepted his apology and asked to facilitate one more computer-supported BPI group. He reluctantly agreed and was obviously relieved when I assured him that the group would be the last I would facilitate at GovernOrg. That signaled the end of my third iteration of the AR cycle, even though I would have preferred to facilitate a few more BPI groups and collect more data before leaving GovernOrg. It also reinforced Lesson 1, which states that intervention does not equate to control in AR. That is, even though it may appear otherwise, since the researcher is an agent of change, in AR the researcher has very little actual control over the environment being investigated.

While the incidents above might be seen as very interesting from a research perspective, hinting at strong technology effects on the behavior of certain managers, I was not able to unequivocally link technology causes (computer support for BPI groups) with the effects observed (the negative reactions from the autocratic manager). The reason was the existence of a key confounding factor—my initial audit and its effect on the autocratic manager. As discussed previously, my initial audit suggested low productivity in the division run by the autocratic manager, which might have played a major role in triggering his reactions. He certainly expressed discomfort and concern about that audit on several occasions, and even ordered another audit (a "real one," in his words) from a large independent accounting firm, whose outcomes were very similar to mine. I could not ignore this source of "noise" that prevented me from drawing unequivocal conclusions in support of previous empirical research on the topic [95]. This highlights one important aspect of AR: the researcher's deep involvement often works against him, so to speak, as the existence of confounding variables becomes very clear and prevents the researcher from making otherwise relatively conclusive interpretations of the research findings. This taught me Lesson 5, summarized below.

Lesson 5: In AR the researcher’sactions may strongly bias theresults. While in AR the researcheris primarily interested in theimpact of certain factors (e.g.,presence of a technology) onpeople, the researchers’ ownactions may have a much strongerimpact than the original factorsof interest on the subjects and,consequently, significantly biasthe results.

During my fourth iteration of the AR cycle at CollegeOrg, I took a much more careful approach, trying not to step on anyone's toes; however, for an agent of change, this is easier said than done. One BPI group, for example, run by an information technology laboratory consultant, put me in hot water with a CollegeOrg senior administrator who had not been invited to be part of the BPI group. His division was cited (in a critical way) in electronic postings exchanged by group members. Those postings found their way to him, and he commented to a faculty member of my department that it had been unethical of me to facilitate the BPI group without inviting him. In fact, it had been the self-appointed group leader who had decided not to invite him. I restricted my involvement to technical facilitation to avoid making what I believe is a basic yet common mistake in AR investigations: to shepherd research subjects into taking certain actions and then later claim that other factors (e.g., computer support) influenced that behavior. Fortunately, the incident was soon forgotten.

Another BPI group, working on the redesign of a support unit, involved a senior faculty member who posted remarks that were seen as offensive by several members; this eventually led to the group's dissolution before any process redesign suggestions had been proposed. Some members vowed never to get involved in computer-supported BPI groups in the future and blamed me for bringing up the idea in the first place. Later, in my interview with the senior faculty member, he explained his behavior:

Sorry, but this whole computer-mediated thing … it was a dumb thing to do … people need to meet face-to-face! […] I was a bit naughty, but I had already made my decision that [the BPI group discussion] was not going to be effective, so I felt it was not going to be so much of a loss anyway. So, I basically, quite deliberately, upped the stakes by using phrases and language which were very exclusive, and quite controversial … It was my way of saying: 'You guys need to get a life, we need to move on because this is not going to work.' It was the ultimate form of arrogance, if you want. I was playing a game.

At this point, it became clear to me that I had learned an important lesson about AR, summarized below as Lesson 6.

Lesson 6: Researchers who employ AR must have a "thick skin." While AR appeals to many researchers because it puts them "in the middle of the action," this can also lead to anxiety and anger if the researcher does not develop a "thick skin" approach to dealing with behavior from subjects that appears to be less than grateful or polite.

In spite of the incidents above, 67% (8 out of 12; half at GovernOrg and half at CollegeOrg) of the computer-supported BPI groups conducted succeeded in producing process redesign proposals of which all or part of the recommendations were implemented with positive business results. Their own members, as well as managers who had not been part of the groups but who had the opportunity to observe the impact that the outcomes of the groups had on their areas, saw these groups as successful and beneficial to their organizations. This contrasted with the widely quoted 30% success rate for traditional (i.e., conducted primarily face-to-face) BPI groups reported by Champy [96]. Moreover, computer support appeared to have drastically reduced the organizational cost of conducting BPI groups by eliminating or reducing transportation, accommodation, disruption, and other costs associated with face-to-face BPI meetings.

Even though it may appear otherwise, I benefited tremendously from the research and was gratified by its general positive impact on the organizations. I learned a great deal about GovernOrg's operations and the intricacies of CollegeOrg's processes, and I made many friends along the way. Even though not everyone was happy with the research and its results, the general sense that I was doing something to improve the organizations and the lives of those who worked for them remained strong throughout iterations 2, 3, and 4 and was often reinforced by the feedback I got from employees.

COMPARING AR WITH OTHER RESEARCH APPROACHES: DID I REALLY MAKE A WISE CHOICE?

While AR rewards the researcher in many ways and may potentially lead to findings that other research approaches may not, it is not an efficient research approach [10], [97]. The researcher has to spend a considerable amount of time providing services to the client in order to be able to collect research data. Once research data are collected, usually in the form of large bodies of text, the analysis is very demanding and time-consuming [78]. This became particularly clear to me as I had the opportunity to compare my progress with that of other doctoral students in my university who began their experimental, survey, or case research at about the same time I began my AR: it appeared that mine was considerably more labor intensive. Originally, my doctoral student colleagues perceived my research as little more than consulting and, as such, as a smart choice on my part. I had, in their eyes, received an "easy ticket" to my doctorate, particularly because my industry background would, in their opinion, allow me to quickly and easily collect all the research data that I needed. Later, when they were already writing up their theses while I was still collecting field data, it appeared that I had become a source of comfort for them. They would first whine about how hard they had been working on their doctorates and then look at me and say something like: "but at least, thank God, I am not in your shoes."

My general feeling at that time was one of having been cheated, even though the decision to conduct AR had been entirely mine and I thoroughly enjoyed what I was doing. One of the problems was the unforeseen amount of work required to appropriately serve two masters with different and often contradictory needs: the academic community (or at least the members of my Ph.D. committee) and the client organizations [98]. This can be illustrated by a simple comparison. From my observation of my colleagues, I would argue that Phillips and Pugh's [29] general chronology of "traditional" (i.e., positivist and noninterventionist) doctoral research is fairly accurate. To it, I added that of my own doctoral research in Fig. 3 for the sake of comparison.

In the British doctorate system, which is adopted in New Zealand, doctoral students are not required to take courses. As a result, some elect to take courses and some, as in my case, do not (I opted to only audit parts of some courses). Nevertheless, I added one year of courses to the chronology of traditional doctoral research proposed by Phillips and Pugh [29] in Fig. 3 to make the comparison more meaningful for those more familiar with the American doctorate system. In the American system, courses are usually required, often as many as 18 courses, or approximately 2-1/2 years of coursework. A simple inspection of Fig. 3 clearly suggests that, if I had had to take courses (even if only during one year), I would never have been able to complete my doctoral research in 4 years unless I had performed fewer iterations of the AR cycle. This taught me Lesson 7, summarized below.

Lesson 7: AR is not an "efficient" approach for research. While allowing the researcher access to "rich" data, AR may require significantly more time and effort from the researcher than other, more traditional research approaches.

Another difficulty inherent in AR that became clear from the comparison with other more traditional research approaches (summarized earlier in Lesson 1) is the higher risk that data collection will be delayed or prevented by organizational events outside the researcher's scope of control.

As implied by the sequential nature of the stages depicted in Fig. 3, a delay in data collection in any of the iterations of the AR cycle, such as a temporary "freeze" on computer-supported BPI groups, would have had a ripple effect throughout the whole project.

Publishing AR: Not for the Faint-Hearted

As mentioned earlier, I developed the habit of writing a paper for submission to a conference first and, after revisions, to a refereed journal, in the specifying learning stage of each iteration of the AR cycle. Each paper summarized the main findings of the iteration and compared them with the findings of previous iterations. I believed that this approach, if followed systematically, had the potential to place me in a very solid position when the time for the final defense of the thesis arrived [99]. By exposing my ideas, theoretical interpretations, and empirical methods throughout the research, I ensured that they were reviewed and criticized by a wide range of (usually more seasoned) researchers.

Fig. 3. My doctoral AR compared with "traditional" doctoral research.

The habit above led me to acquire some valuable experience with the peer-review process of several conferences and journals, particularly in regard to AR papers. Notwithstanding the fact that English was not my first language, my experience with the peer-review process suggests that it is very difficult to publish AR, particularly in "top" North American journals, for a variety of reasons. Among the reasons is, undoubtedly, the dominance in North American academic circles of other business research approaches, particularly case, experimental, and survey research [23], and a dearth of published examples of AR, particularly AR addressing technology issues [12]. Not only does this make it difficult for those trying to publish AR studies to identify model papers on which to base their own papers, but it also makes it difficult to find reviewers familiar with, and favorable toward, AR. The latter difficulty was particularly acute in my chosen area of research, referred to at the time by a few related names such as "group support systems," "groupware," and "computer-supported cooperative work." The reason was that the vast majority of previous research in the area had been experimental [35]–[37], [100]. This practically ensured that at least one of the reviewers (and, quite often, the senior and associate editors) for any paper I submitted for publication held (even if subconsciously) assumptions about research rigor that were grounded in experimental research. That reviewer usually provided hints of his or her research orientation in the review, along with recommendations on how my research approach could be improved if more control were applied judiciously and focus were directed to a few variables. In other words, at least one reviewer saw AR as a form of poorly conducted experimental field research. In my experience, the opinion of one single reviewer, especially if stated in strong and unequivocal terms, will more often than not seal the fate of a paper submission for a journal whose acceptance rate is 15% or less. The consequence is that AR papers, as well as others that do not conform to the current norm, normally fall into the 85% that get rejected. This perpetuates a vicious cycle, since reviewers for reputable journals are often selected by editors based on their publication record in those journals and others of similar stature.

Other difficulties with publishing AR, however, are intrinsic to the research approach itself. For example, it is very difficult to describe an AR project in detail within the confines of a typical journal article without going beyond the maximum length prescribed. Consequently, authors have to limit their discussion to certain aspects of the AR project, which often creates inferential gaps that are picked up by reviewers. For instance, I once submitted a paper to a North American journal that went beyond the prescribed number of pages and was asked by the associate editor assigned to the paper to reduce its size by (approximately) half. The associate editor asked me to focus my revision on certain sections, a few of which, he or she believed, could be entirely eliminated. I then revised the paper based on those comments and resubmitted it. The next verdict from the reviewers was "revise and resubmit" because "there was an interesting story to be told" but "more details were needed" to fully assess how my conclusions followed from the narrative. I added more details and resubmitted the paper. The ensuing verdict was "reject," decided by the associate editor without letting the paper go to the reviewers, because the paper was now "too long and descriptive."

It is also very difficult to describe the chronological stages of an AR project in the way they actually occur. The reason seems to be that the resulting paper does not conform to the usual "theory-test-findings-conclusion" (or similar) structure of most empirical journal articles. As such, extra costs are associated with reading the paper, particularly for reviewers accustomed to papers following that more traditional structure. This is often reflected in comments such as "the paper is written in a confused way," "the ideas in the paper do not flow logically," and "the structure of the paper is awkward and confusing."

The experience of writing up my doctoral thesis, in addition to writing and submitting conference papers and journal articles, also taught me an important lesson, summarized below as Lesson 8. Currently, the book (or long report) format is better suited for AR reporting than the traditional conference paper or journal article format. The latter require a level of summarization that often is just not appropriate for AR, forcing the researcher to fall into some of the traps discussed above. I like to compare conducting AR to making a legal case in a court of law. In it, the researcher presents a large body of (often scattered) evidence in order to support a thesis, which can be represented as a causal model, "beyond reasonable doubt." This requires considerably more elaboration than describing a causal model and discussing the results of a test of the model.

Lesson 8: It is difficult to publish AR in conference proceedings and journal articles. While appealing to practitioners, AR studies usually require a level of detail that is difficult to accommodate in conference papers and journal articles; this presently makes books and monographs better outlets for AR reporting.

Lesson 8 is not aimed at discouraging authors from trying to publish AR in conference proceedings and journal articles. Rather, it is a statement that reflects a status quo that I hope to help change. To change the status quo, it is important that editors, senior editors, associate editors, and reviewers of journals recognize the difficulties associated with reporting on AR. The editorial teams of a few prestigious journals are already moving in that direction, as indicated by the recent publication of AR articles in those journals. IEEE TRANSACTIONS ON PROFESSIONAL COMMUNICATION is one such journal.

CONCLUSION

The narrative in this paper may give the reader the impression that AR is not an appropriate approach for business communication research. That is not, however, the main message of this paper. To be sure, conducting AR is a risky proposition, one that carries a number of difficulties and personal costs. Yet, it also carries rewards that may far outweigh those difficulties and costs. This brings me to one of the main messages of this paper: AR is better tailored to certain types of researchers. It is reasonable to argue, based on the narrative, that AR is particularly well suited for researchers who have previous industry experience and who want to do research related to the solution of complex problems in settings they are familiar with. In AR, the researcher must be able to provide the client organization a service (e.g., consulting or software development) that is seen as valuable by the client organization and that at the same time enables him or her to collect enough data for building a theoretical or normative model. This should be done with the eight lessons discussed earlier in mind.

Regarding the lessons learned, it is important to note that several of them are applicable to other research approaches. This should not be surprising, because AR shares a number of important characteristics with other research approaches. For example, like other research approaches, AR is aimed at generating new and valid knowledge through a rigorous and methodical process of discovery. Like AR, case study research is often seen as inefficient, presents open coding difficulties, and is difficult to find publication outlets for. Challenges in gaining site access are not restricted to AR either: case, experimental, and survey research share this characteristic. Also, quasi-experimental field experiments may share with AR the possibility that the researcher's actions bias the results [101]. Unique to AR is the attempt at positive intervention in the organization, and therefore the "thick skin" and the understanding of the lack of control that this requires.

Another key message of this paper is that the researcher's background and personal interests are as important as the goal of the research when it comes to choosing AR over other research approaches, particularly whether the goal is to test or build theoretical models. For many years, especially during the 1970s and 1980s, there was an epistemological debate between AR proponents and detractors [1]. In that debate, AR has often been presented as opposed to positivism [51], [102], an argument that is hard to justify, as AR and positivism can hardly be placed in the same conceptual category. AR is an approach, like experimental research, not an epistemology, like positivism or interpretivism [23], [103]–[105]. Thus, comparing AR with positivism is equivalent to comparing a "painting technique" (e.g., oil painting) with a "school of painting" (e.g., impressionism). Yet, accepting this argument leads to an inevitable conclusion, which is that there can be "positivist AR." While I believe that AR can be conducted in a positivist manner, my experience suggests that AR is a research approach that is particularly useful for the development of theoretical models and somewhat difficult to use in the testing of theoretical models.

Of course, a researcher may choose to conduct an AR study to test a theoretical model (or a set of hypotheses), but to do so successfully, the researcher needs to address the problems associated with the lack of control and unfocused data collection often associated with AR. If those problems are not addressed, tests will not conform to well-established positivist methodological standards. In consequence, it is likely that reports generated based on the study will be questioned on methodological grounds and most likely denied publication in "top-tier" journals, unless the reports are judged based on standards that are different from those used to evaluate traditional positivist research. To this, it can be added that AR is not a very efficient theoretical model-testing approach. That is, testing a theoretical model using AR requires considerably more time and effort from the researcher than using, say, experimental research.

The above problem can be addressed by structuring AR projects as quasi-experiments [101], [106], as originally envisioned by one of its forefathers, namely Kurt Lewin [6], [7]. In that sense, hypotheses could be tested by comparing data collected before and after the researcher's intervention in each AR cycle, with the possibility of conducting only one iteration of the AR cycle. Nonparametric techniques [107] could be used for quantitative analysis, and the results triangulated with the qualitative data collected and compiled during the AR study [77], [78].
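As a concrete illustration of this quasi-experimental strategy, the sketch below compares a hypothetical pre- and post-intervention measure (say, monthly counts of BPI groups conducted) using the Wilcoxon rank-sum test, one standard nonparametric technique. The data values, the variable being measured, and the normal approximation used for the p-value are illustrative assumptions, not results from the study.

```python
import math

def rank_sum_test(before, after):
    """Wilcoxon rank-sum (Mann-Whitney) test via the normal approximation.

    Returns (U, two_sided_p). A sketch only: very small samples would
    normally use exact tables, and ties would get a variance correction.
    """
    combined = sorted((x, grp) for grp in (0, 1) for x in (before, after)[grp])
    # Assign midranks so that tied observations share the same rank.
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j][0] == combined[i][0]:
            j += 1
        midrank = (i + 1 + j) / 2.0
        for k in range(i, j):
            ranks[k] = midrank
        i = j
    r_after = sum(ranks[k] for k, (_, grp) in enumerate(combined) if grp == 1)
    n1, n2 = len(before), len(after)
    u = r_after - n2 * (n2 + 1) / 2.0           # Mann-Whitney U for "after"
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))        # two-sided p-value
    return u, p

# Hypothetical monthly counts of BPI groups before/after introducing
# email conferencing support (made-up numbers for illustration).
before = [1, 0, 2, 1, 1, 0]
after = [3, 2, 4, 3, 2, 3]
u, p = rank_sum_test(before, after)
print(f"U = {u}, p = {p:.4f}")
```

A single AR iteration thus yields a before/after comparison whose quantitative result can then be triangulated against the qualitative field data.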

In addition, instead of using a strictly positivist approach to conduct AR, one could use a modified approach to positivist research that builds on Popper's falsifiability criterion, as exemplified by Kock [108], [109]. In those studies, the researcher used AR to test a hypothesis not only by looking for evidence that supported the hypothesis, but also by looking for evidence that suggested the existence of an exception to the hypothesis (or evidence supporting the negative version of the original hypothesis), and showing that no such evidence could be found. According to Popper's modified positivist epistemology, every hypothesis should be clearly falsifiable, and absence of contradictory evidence becomes a stronger corroboration of the hypothesis than the mere presence of supporting evidence [108]. Since in AR the researcher is an insider as opposed to a removed observer, and thus has access to a broader body of evidence than in other research approaches, AR seems to hold great promise when employed in conformity with Popper's modified positivist epistemology.

Recently, there has been much discussion about the role of relevance in business research, particularly research addressing technology-related issues, and its relationship with rigor [110]. It has been argued that rigorous research can often be irrelevant [111] and that much of the relevant research currently conducted ends up not being published in academic outlets because it does not conform to traditional standards of scientific rigor [112]. AR provides a partial solution to this problem, as it is, by definition, relevant to practitioners and can be conducted in a rigorous way. To be sure, the scope of relevance of AR findings to practitioners may vary. For example, the outcomes of an AR study may be relevant to a single company, if the problems addressed through the research are specific to that company; to a whole industry, if the problems are faced by all (or most) companies in the industry; to a whole sector of the economy, if the problems are faced by all (or most) companies in the sector in question; and so on.

Should business communication researchers in general, and doctoral students in particular, embark on AR projects? My answer is "yes," if they are aware of the rewards and difficulties associated with the approach, feel strongly that the former outweigh the latter, and believe that they can overcome those difficulties. I hope that this paper will help researchers who are considering using AR to identify potential difficulties and rewards, and make an informed decision about whether to use this approach.

APPENDIX
CODING PROCEDURE USED IN THE DATA ANALYSIS

The process involved in the identification of causal links based on the research data collected was centered around one of the sources of data, against which evidence from other sources was matched. Central data sources were chosen based on their volume and perceived degree of coverage of the research topic. In the first stage of the research, in Brazil, the central data sources were field notes based on participant observation and unstructured interviews. The central data sources in the second research stage, conducted in New Zealand, were semistructured interview transcripts. Below are the main data analysis steps employed and their corresponding coding steps in grounded theory methodology [79]–[82].

Step 1: Categorizing, or Open Coding

In the categorizing step, I actively sought variables in the research data associated with relevant events. A relevant event clearly indicated an effect of technology support on a BPI group. One example of a relevant event in the first research phase was an increase in the number of BPI groups per unit of time after the email conferencing system was made available to prospective BPI group members. Although other variables emerged from the analysis of the research data, two variables were initially identified as being associated with this event: email conferencing support availability and organizational BPI group capacity. The related effect was that the first variable caused an increase in the second.

Step 2: Tabulating, or Preparing the Stage for Axial Coding

In the tabulating step, I created tables showing the variation in the contents of variables along units of analysis on either a quantitative or qualitative scale, an approach suggested by Miles and Huberman [83, p. 177] as particularly useful to prevent data overload from "numbing" the researcher. Tables were indexed by number and description and saved into a "tables" MS Word file. One example is a table showing the variation across different BPI groups in the content of the variables "departments involved" (i.e., number of departments represented in a BPI group) and "scope of change" (i.e., breadth of the process changes targeted by a BPI group). In this example, the contents of the first variable varied along a quantitative (i.e., numeric) scale, whereas the contents of the latter varied along a qualitative (i.e., symbolic) one.
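The tabulating step can be pictured as a small program: one row per unit of analysis (here, a BPI group), one column per coded variable. The group names and cell values below are made-up illustrations; only the two variable names come from the example in the text.

```python
# Minimal sketch of a tabulating-step table. Only "departments involved"
# and "scope of change" are from the text; rows are hypothetical.
rows = [
    ("Group A", 2, "single process"),
    ("Group B", 5, "cross-departmental"),
    ("Group C", 3, "single process"),
]

header = ("BPI group", "departments involved", "scope of change")
# Column widths sized to the longest entry in each column.
widths = [max(len(str(r[i])) for r in [header, *rows]) for i in range(3)]

def fmt(row):
    return " | ".join(str(cell).ljust(w) for cell, w in zip(row, widths))

table = "\n".join(
    [fmt(header), "-+-".join("-" * w for w in widths)] + [fmt(r) for r in rows]
)
print(table)
```

Laying the coded variables out this way is what "prepares the stage" for axial coding: variation across rows makes candidate cause-effect pairings visible at a glance.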

Step 3: Explaining, or Axial Coding

In the explaining step, I tried to explain the effects previously identified in the categorizing step, using evidence from both the categorizing step itself and the tabulating step. This explanation process was carried out for each relevant effect and included the building of explanations based on evidence pertaining to the effect. An illustration of this process is provided in Fig. 4. The names and context in this illustration have been disguised to protect confidentiality.

In the illustration in Fig. 4, three explanations (E1, E2, and E3) were derived from the sets of confirmatory and disconfirmatory evidence summarized above them. The evidence is presented in the form of facts extracted from different research data sources: 1) structured interview transcripts, referred to as IT1, IT2, etc., indicating each of the interview transcript files; 2) tables, referred to as TB1, TB2, etc., indicating each of the tables in the tables file; and 3) field notes (i.e., participant observation notes), referred to as FN1, FN2, etc., indicating each of the field notes files. Each reference to a data source was followed by its page in the respective file to allow for quick location, if necessary, of the piece of data referenced.

One rule followed throughout the research was that a set of explanations related to a particular effect should account for all the evidence related to that particular effect, whether or not the evidence was confirmatory. In doing so, I followed an approach similar to what is referred to by Richardson [99, p. 520] as experimental writing, and by Eisenhardt [113, p. 541] as shaping hypotheses.

Fig. 4. Deriving explanations for an effect based on research evidence.
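The referencing scheme and the accounting rule above can be sketched as a small data structure: each piece of evidence carries a source tag (IT/TB/FN plus a page), and a simple check verifies that the explanations for an effect jointly account for every piece of evidence, confirmatory or not. All evidence items, source tags, and explanation labels below are hypothetical.

```python
# Hypothetical evidence records for one effect; "source" mimics the
# IT/TB/FN + page referencing scheme described in the text.
evidence = {
    "ev1": {"source": "IT2, p. 5", "confirmatory": True},
    "ev2": {"source": "TB1, p. 2", "confirmatory": True},
    "ev3": {"source": "FN3, p. 11", "confirmatory": False},
}

# Which evidence each (hypothetical) explanation accounts for.
explanations = {
    "E1": {"ev1", "ev3"},
    "E2": {"ev2"},
}

def unaccounted(evidence, explanations):
    """Evidence items not covered by any explanation (should be empty)."""
    covered = set().union(*explanations.values())
    return set(evidence) - covered

# The rule in the text requires this to be the empty set.
print(unaccounted(evidence, explanations))
```

Note that disconfirmatory items (like "ev3") must be covered too; an explanation set that only covered supporting evidence would violate the rule.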

The building of explanations initially leads to the identification of new variables, almost as if this analytic process had no end. However, as the researcher moves on through several iterations of the steps described in this Appendix, the building of explanations gradually moves into a "synthesis phase" as several variables begin merging together. A clear indication that the analysis is moving toward this stage is the systematic finding of causes that are the same for different effects, which is aided by the building of causal models in the modeling step, as discussed next.

Step 4: Modeling, or Selective Coding

In the modeling step, I built explanatory causal models based on the explanations generated in the explaining step. These causal models followed to a large extent the typical conventions used in previous research aimed at building and structuring knowledge as sets of causal relationships between research variables [87], [88]. They were composed of four types of variables: independent, intervening, moderating, and dependent variables.

The causal model illustrated in Fig. 5 was built based on the example provided above (in Step 3), where three explanations accounted for evidence related to the effect that EC-support availability has on the perceived degree of satisfaction experienced by members of BPI groups. Each explanation in Fig. 4 led to a different path of links between variables in Fig. 5. Explanation E1 led to the path linking the variables EC-support availability, member participation control, member stress, and member satisfaction. Explanation E2 led to the path linking EC-support availability, member participation control, member participation, group interaction, and member satisfaction. Finally, explanation E3 led to the path linking EC-support availability, member function disruption, member participation, group interaction, and member satisfaction.

Causal links between variables are represented with an arrow pointing in the direction of the causal link. Each arrow is drawn with a solid or dotted line. A solid line indicates that the causal link is positive, that is, that an increase in the variable at the beginning of the link will contribute to an increase in the variable at the end of the link; a dotted line indicates that the causal link is negative.

Research variables were represented by rectangles with the names of the variables. Rectangle borders could be either normal solid, bold solid, or dotted. Normal solid borders indicate a neutral effect on the variable they represent, that is, neither an increase nor a decrease in the variable; bold solid borders indicate an increase in the variable; and dotted borders a decrease. This static type of representation is used in a descriptive, rather than a predictive, way. That is, the causal model showing increases and decreases in certain variables tries to describe what happened in a given research context in a summarized way.

When building causal models, I tried to explain the evidence obtained in the research, rather than the lack of evidence. That is, if there was no link connecting two variables in a model, it was because there was no evidence for the existence of the link. Given my almost full-time presence in the companies and my deep involvement with management and employees, the absence of evidence was itself interpreted as an important piece of "evidence."

Fig. 5. Example of causal diagram.
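A causal model of this kind can be represented programmatically as a signed directed graph. The sketch below encodes the three paths described for Fig. 5 and recovers them by enumeration; the variable names come from the text, but the +1/-1 signs on individual links are illustrative assumptions, since the figure itself is not reproduced here.

```python
# Signed directed graph for the Fig. 5 causal model. Variable names are
# from the text; the +1/-1 link signs are assumptions for illustration.
links = {
    ("EC-support availability", "member participation control"): +1,
    ("EC-support availability", "member function disruption"): +1,
    ("member participation control", "member stress"): -1,
    ("member stress", "member satisfaction"): -1,
    ("member participation control", "member participation"): +1,
    ("member function disruption", "member participation"): -1,
    ("member participation", "group interaction"): +1,
    ("group interaction", "member satisfaction"): +1,
}

def paths(links, start, end, prefix=None):
    """Enumerate all directed paths from start to end."""
    prefix = (prefix or []) + [start]
    if start == end:
        yield prefix
    for (a, b) in links:
        if a == start and b not in prefix:  # avoid revisiting nodes
            yield from paths(links, b, end, prefix)

found = list(paths(links, "EC-support availability", "member satisfaction"))
for p in found:
    print(" -> ".join(p))
```

Each of the three enumerated paths corresponds to one of the explanations E1, E2, and E3 described above, which is the sense in which selective coding "merges" explanations into a single model.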


ACKNOWLEDGMENT

This work was supported in part by grants from the Brazilian Ministry of Science and Technology and the New Zealand Ministry of External Relations and Trade. The author would like to thank the anonymous reviewers and, particularly, the Editor, K. S. Campbell, for their many valuable suggestions. He would also like to thank the management and employees of the organizations described in this study for their time and support; Bob McQueen for being an outstanding doctoral dissertation advisor; and David Avison, Richard Baskerville, Francis Lau, Michael Myers, John Nosek, and Trevor Wood-Harper for sharing many interesting research ideas in connection with action research.

REFERENCES

[1] C. Argyris, R. Putnam, and D. M. Smith, Action Science. SanFrancisco, CA: Jossey-Bass, 1985.

[2] P. Checkland, Systems Thinking, Systems Practice. New York:Wiley, 1981.

[3] K. Lewin, The Conceptual Representation and the Measurement ofPsychological Forces. Durham, NC: Duke Univ. Press, 1938.

[4] A. F. Marrow, The Practical Theorist: The Life and Work of KurtLewin. New York: Basic Books, 1969.

[5] G. W. Lewin, Ed., Resolving Social Conflicts. New York: Harper &Row, 1948.

[6] K. Lewin, “Action research and minority problems,” in ResolvingSocial Conflicts, G. W. Lewin, Ed. New York: Harper & Row, 1946,pp. 201–216.

[7] M. Elden and R. F. Chisholm, “Emerging varieties of action research,”Human Relations, vol. 46, no. 2, pp. 121–141, 1993.

[8] W. M. Fox, “An interview with Eric Trist, father of the sociotechnicalsystems approach,” J. Appl. Behav. Sci., vol. 26, no. 2, pp. 259–279,1990.

[9] M. Peters and V. Robinson, “The origins and status of action research,”J. Appl. Behav. Sci., vol. 20, no. 2, pp. 113–124, 1984.

[10] R. N. Rapoport, “Three dilemmas in action research,” Human Relations,vol. 23, no. 6, pp. 499–513, 1970.

[11] B. Gustavsen, “Action research and the generation of knowledge,”Human Relations, vol. 46, no. 11, pp. 1361–1365, 1993.

[12] F. Lau, “A review on the use of action research in information systemsstudies,” in Information Systems and Qualitative Research, A. S. Lee,J. Liebenau, and J. I. DeGross, Eds. London, UK: Chapman &Hall, 1997, pp. 31–68.

[13] D. Avison, F. Lau, M. D. Myers, and P. Nielson, “Action research,”Commun. ACM, vol. 42, no. 1, pp. 94–97, 1999.

[14] R. Baskerville, “Distinguishing action research from participative casestudies,” J. Syst. Inform. Technol., vol. 1, no. 1, pp. 25–45, 1997.

[15] , “Investigating information systems with action research,”Communications of The Association for Information Systems, vol. 2, art.19, 1999. [Online]. Available: http://cais.isworld.org.

[16] R. Baskerville and T. Wood-Harper, “A critical perspective on actionresearch as a method for information systems research,” J. Inform.Technol., vol. 11, no. 3, pp. 235–246, 1996.

[17] , “Diversity in information systems action research methods,” Eur.J. Inform. Syst., vol. 7, no. 2, pp. 90–107, 1998.

[18] N. Kock, D. Avison, R. Baskerville, M. Myers, and T. Wood-Harper,“IS action research: Can we serve two masters?,” in Proc. 20th Int.Conf. Inform. Syst., P. De and J. DeGross, Eds., New York, 1999,pp. 582–585.

[19] F. Lau, “Toward a framework for action research in information systemsstudies,” Inform. Technol. People, vol. 12, no. 2, pp. 148–175, 1999.

Page 21: Action research: Lessons learned from a multi-iteration ...cits.tamiu.edu/nedkock/Pubs/2003JournalIEEETPC2/Kock2003.pdf · IEEE TRANSACTIONS ON PROFESSIONAL COMMUNICATION, VOL. 46,

KOCK: MULTI-ITERATION STUDY OF COMPUTER-MEDIATED COMMUNICATION 125

[20] E. Mumford, “Advice for an action researcher,” Inform. Technol. People,vol. 14, no. 1, pp. 12–27, 2001.

[21] M. D. Myers, “Qualitative research in information systems,” MISQuart., vol. 21, no. 2, pp. 241–242, 1997.

[22] N. Kock and F. Lau, “Information systems action research: Serving twodemanding masters,” Inform. Technol. People (Special Issue Action Res.Inform. Syst.), vol. 14, no. 1, pp. 6–12, 2001.

[23] W. J. Orlikowski and J. J. Baroudi, “Studying information technologyin organizations: Research approaches and assumptions,” Inform. Syst.Res., vol. 2, no. 1, pp. 1–28, 1991.

[24] N. Kock, “Compensatory adaptation to a lean medium: An actionresearch investigation of electronic communication in processimprovement groups,” IEEE Trans. Prof. Commun., vol. 44, no. 4, pp.267–285, 2001.

[25] , “Asynchronous and distributed process improvement: Therole of collaborative technologies,” Inform. Syst. J., vol. 11, no.2, pp. 87–110, 2001.

[26] S. R. Barley, “Images of imaging: Notes on doing longitudinal fieldwork,” Organiz. Sci., vol. 1, no. 3, pp. 220–247, 1989.

[27] U. Schultze, “A confessional account of an ethnography about knowledge work,” MIS Quart., vol. 24, no. 1, pp. 43–79, 2000.

[28] J. Van Maanen, Tales of the Field: On Writing Ethnography. Chicago, IL: Chicago Univ. Press, 1988.

[29] E. Phillips and D. Pugh, How to Get a PhD. London, UK: Taylor & Francis, 1994.

[30] N. C. Smith, “The context of doctoral research,” in The Management Research Handbook, N. C. Smith and P. Dainty, Eds. London, UK: Routledge, 1991, pp. 215–226.

[31] K. L. Arnold, The Manager’s Guide to ISO 9000. New York: Free Press, 1994.

[32] W. Minchin, A Quest for Quality: ISO 9000 Standards. Wellington, New Zealand: Working Life Communications, 1994.

[33] F. Voehl, P. Jackson, and D. Ashton, ISO 9000: An Implementation Guide for Small and Mid-Sized Businesses. Delray Beach, FL: St. Lucie Press, 1994.

[34] R. Davison, “The development of an instrument for measuring the suitability of using GSS to support meetings,” in Proc. Pan Pacific Conf. Inform. Syst., C. H. Chuan and J. S. Dhaliwal, Eds. Singapore: Dept. of Decision Sci., National Univ. Singapore, 1995, pp. 21–29.

[35] A. R. Dennis and R. B. Gallupe, “A history of group support systems empirical research: Lessons learned and future directions,” in Group Support Systems: New Perspectives, L. M. Jessup and J. S. Valacich, Eds. New York: Macmillan, 1993, pp. 59–77.

[36] A. R. Dennis, B. J. Haley, and R. J. Vanderberg, “A meta-analysis of effectiveness, efficiency, and participant satisfaction in group support systems research,” in Proc. 17th Int. Conf. Inform. Syst., J. I. DeGross, S. Jarvenpaa, and A. Srinivasan, Eds., New York, 1996, pp. 278–289.

[37] M. Mandivala and P. Gray, “Is IS research on GSS relevant?,” Inform. Resourc. Manage. J., vol. 11, no. 1, pp. 29–37, 1998.

[38] J. F. Nunamaker, Jr., A. R. Dennis, J. S. Valacich, D. R. Vogel, and J. F. George, “Electronic meeting systems to support group work,” Commun. ACM, vol. 34, no. 7, pp. 40–61, 1991.

[39] J. Sheffield and B. Gallupe, “Using electronic meeting technology to support economic development in New Zealand: Short term results,” J. Manage. Inform. Syst., vol. 10, no. 3, pp. 97–116, 1993.

[40] T. H. Davenport, Process Innovation. Boston, MA: Harvard Bus. Press, 1993.

[41] J. Short, E. Williams, and B. Christie, The Social Psychology of Telecommunications. London, UK: Wiley, 1976.

[42] R. L. Daft and R. H. Lengel, “Organizational information requirements, media richness and structural design,” Manage. Sci., vol. 32, no. 5, pp. 554–571, 1986.

[43] J. Fulk, J. Schmitz, and C. W. Steinfield, “A social influence model of technology use,” in Organizations and Communication Technology, J. Fulk and C. Steinfield, Eds. Newbury Park, CA: Sage, 1990, pp. 117–140.

[44] M. L. Markus, “Electronic mail as the medium of managerial choice,” Org. Sci., vol. 5, no. 4, pp. 502–527, 1994.

[45] L. Berkowitz and E. Donnerstein, “External validity is more than skin deep: Some answers to criticisms of laboratory experiments,” Amer. Psychol., vol. 37, no. 3, pp. 245–257, 1982.

[46] E. G. Carmines and R. A. Zeller, Reliability and Validity Assessment. Beverly Hills, CA: Sage, 1979.

[47] T. D. Cook and D. T. Campbell, “Four kinds of validity,” in Handbook of Industrial and Organizational Psychology, M. D. Dunnette, Ed. Chicago, IL: Rand McNally, 1976, pp. 224–246.

[48] J. I. Cash, Jr. and P. R. Lawrence, Eds., The Information Systems Research Challenge: Qualitative Research Methods. Boston, MA: Harvard Bus. School, 1989.

[49] R. Galliers, Ed., Information Systems Research: Issues, Methods and Practical Guidelines. Boston, MA: Blackwell Sci., 1992.

[50] F. Heller, “Another look at action research,” Human Relations, vol. 46, no. 10, pp. 1235–1242, 1993.

[51] P. Reason, “Sitting between appreciation and disappointment: A critique of the special edition of human relations on action research,” Human Relations, vol. 46, no. 10, pp. 1253–1270, 1993.

[52] L. Chidambaram and B. Jones, “Impact of communication medium and computer support on group perceptions and performance: A comparison of face-to-face and dispersed meetings,” MIS Quart., vol. 17, no. 4, pp. 465–491, 1993.

[53] R. B. Gallupe, W. H. Cooper, M. Grise, and L. M. Bastianutti, “Blocking electronic brainstorms,” J. Appl. Psychol., vol. 79, no. 1, pp. 77–86, 1994.

[54] S. J. Winter, “The symbolic potential of computer technology: Differences among white-collar workers,” in Proc. 14th Int. Conf. Inform. Syst., J. I. DeGross, R. P. Bostrom, and D. Robey, Eds., New York, 1993, pp. 331–344.

[55] E. Brynjolfsson and L. Hitt, “Is information systems spending productive? New evidence and new results,” in Proc. 14th Int. Conf. Inform. Syst., J. I. DeGross, R. P. Bostrom, and D. Robey, Eds., New York, 1993, pp. 47–64.

[56] J. Forman and J. Rymer, “The genre system of the Harvard case method,” J. Bus. Tech. Commun., vol. 13, no. 4, pp. 373–400, 1999.

[57] M. Alavi, “An assessment of electronic meeting systems in a corporate setting,” Inform. Manage., vol. 25, no. 4, pp. 175–182, 1993.

[58] E. M. Trauth and B. O’Connor, “A study of the interaction between information technology and society: An illustration of combined qualitative research methods,” in Information Systems Research: Contemporary Approaches and Emergent Traditions, H. Nissen, H. K. Klein, and R. Hirschheim, Eds. New York: North-Holland, 1991, pp. 131–143.

[59] D. B. Candlin and S. Wright, “Managing the introduction of expert systems,” Int. J. Oper. Prod. Manage., vol. 12, no. 1, pp. 46–59, 1991.

[60] N. Kock, R. J. McQueen, and J. L. Scott, “Can action research be made more rigorous in a positivist sense? The contribution of an iterative approach,” J. Syst. Inform. Technol., vol. 1, no. 1, pp. 1–24, 1997.

[61] N. Kock, “The three threats of action research: A discussion of methodological antidotes in the context of an information systems study,” Decision Support Systems, to be published.

[62] P. Checkland, “From framework through experience to learning: The essential nature of action research,” in Information Systems Research: Contemporary Approaches and Emergent Traditions, H. Nissen, H. K. Klein, and R. Hirschheim, Eds. New York: North-Holland, 1991, pp. 397–403.

[63] B. Sommer and R. Sommer, A Practical Guide to Behavioral Research. New York, NY: Oxford Univ. Press, 1991.

[64] G. I. Susman and R. D. Evered, “An assessment of the scientific merits of action research,” Admin. Sci. Quart., vol. 23, no. 4, pp. 582–603, 1978.

[65] C. Bunning, Placing Action Learning and Action Research in Context. Brisbane, Australia: Int. Manage. Centre, 1995.

[66] R. McTaggart, “Principles for participatory action research,” Adult Educ. Quart., vol. 41, no. 3, pp. 168–187, 1991.

[67] N. Kock, “Negotiating mutually satisfying IS action research topics with organizations: An analysis of Rapoport’s initiative dilemma,” J. Workplace Learning, vol. 9, no. 7, pp. 253–262, 1997.

[68] W. E. Deming, Out of The Crisis. Cambridge, MA: Center for Adv. Eng. Study, MIT, 1986.

[69] I. Graham, TQM in Service Industries: A Practitioner’s Manual. New York: Tech. Commun., 1992.

[70] K. Ishikawa, Guide to Quality Control. Tokyo, Japan: Asian Productivity Organization, 1986.

[71] J. Juran, Juran on Leadership for Quality. New York: The Free Press, 1989.

[72] M. Raff and J. Beedon, “Total quality management in the West Midlands Employment Service,” in Managing Change in the New Public Sector, R. Lovell, Ed. Harlow, Essex, UK: Longman, 1994, pp. 294–318.

[73] R. Semler, “Managing without managers,” Harvard Bus. Rev., vol. 67, no. 5, pp. 76–84, 1989.

[74] ——, Maverick. London, UK: Arrow, 1993.

[75] ——, “Why my former employees still work for me,” Harvard Bus. Rev., vol. 72, no. 1, pp. 64–74, 1994.

[76] ——, “Who needs bosses?,” Across The Board, vol. 31, no. 2, pp. 23–26, 1994.

[77] T. D. Jick, “Mixing qualitative and quantitative methods: Triangulation in action,” Admin. Sci. Quart., vol. 24, no. 4, pp. 602–611, 1979.

[78] M. C. Lacity and M. A. Janson, “Understanding qualitative data: A framework of text analysis methods,” J. Manage. Inform. Syst., vol. 11, no. 2, pp. 137–155, 1994.

[79] B. G. Glaser, Theoretical Sensitivity: Advances in the Methodology of Grounded Theory. Mill Valley, CA: Sociology Press, 1978.

[80] ——, Emergence vs. Forcing: Basics of Grounded Theory Analysis. Mill Valley, CA: Sociology Press, 1992.

[81] B. G. Glaser and A. L. Strauss, The Discovery of Grounded Theory: Strategies for Qualitative Research. Chicago, IL: Aldine, 1967.

[82] A. Strauss and J. Corbin, Basics of Qualitative Research: Grounded Theory Procedures and Techniques. Newbury Park, CA: Sage, 1990.

[83] M. B. Miles and A. M. Huberman, Qualitative Data Analysis: An Expanded Sourcebook. London, UK: Sage, 1994.

[84] R. K. Yin, “The case study crisis: Some answers,” Admin. Sci. Quart., vol. 26, no. 1, pp. 58–65, 1981.

[85] ——, “Research design issues in using the case study method to study management information systems,” in The Information Systems Research Challenge: Qualitative Research Methods, J. I. Cash and P. R. Lawrence, Eds. Boston, MA: Harvard Bus. School, 1989, pp. 1–6.

[86] ——, Case Study Research. Newbury Park, CA: Sage, 1994.

[87] R. P. Bagozzi, Causal Models in Marketing. New York: Wiley, 1980.

[88] J. A. Davis, The Logic of Causal Order. London, UK: Sage, 1985.

[89] H. J. Arnold, “Moderator variables: A clarification of conceptual, analytic, and psychometric issues,” Org. Behav. Human Perform., vol. 29, no. 4, pp. 143–174, 1982.

[90] U. Sekaran, Research Methods for Managers. New York: Wiley, 1984.

[91] R. M. Baron and D. A. Kenny, “The moderator-mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations,” J. Personal. Social Psychol., vol. 51, no. 6, pp. 1173–1182, 1986.

[92] P. Reason, “The co-operative inquiry group,” in Human Inquiry in Action, P. Reason, Ed. Newbury Park, CA: Sage, 1988, pp. 18–39.

[93] A. L. Strauss and J. M. Corbin, Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory. Thousand Oaks, CA: Sage, 1998.

[94] L. D. Ketchum and E. Trist, All Teams Are Not Created Equal. Newbury Park, CA: Sage, 1992.

[95] A. Clement, “Computing at work: Empowering action by low-level users,” Commun. ACM, vol. 37, no. 1, pp. 53–63, 1994.

[96] J. Champy, Reengineering Management. New York: Harper Bus., 1995.

[97] M. L. Markus, “Case selection in a disconfirmatory case study,” in The Information Systems Research Challenge: Qualitative Research Methods, J. I. Cash and P. R. Lawrence, Eds. Boston, MA: Harvard Bus. School, 1989, pp. 20–26.

[98] R. Sommer, “Serving two masters,” J. Consumer Affairs, vol. 28, no. 1, pp. 170–187, 1994.

[99] L. Richardson, “Writing: A method of inquiry,” in Handbook of Qualitative Research, N. K. Denzin and Y. S. Lincoln, Eds. Newbury Park, CA: Sage, 1994, pp. 516–529.

[100] P. Gray and M. Mandivala, “New directions for GDSS,” Group Decision and Negotiation, vol. 8, no. 1, pp. 77–83, 1999.

[101] D. T. Campbell and J. C. Stanley, Experimental and Quasi-Experimental Designs for Research. Boston, MA: Houghton Mifflin, 1963.

[102] P. Reason, Ed., Human Inquiry in Action. Newbury Park, CA: Sage, 1988.

[103] R. A. Hirschheim, “Information systems epistemology: An historical perspective,” in Research Methods in Information Systems, E. Mumford, Ed. New York: North-Holland, 1985, pp. 13–35.

[104] H. K. Klein and M. D. Myers, “A set of principles for conducting and evaluating interpretive field studies in information systems,” MIS Quart., vol. 23, no. 1, pp. 67–93, 1999.

[105] J. Teichman and K. C. Evans, Philosophy: A Beginner’s Guide. Oxford, UK: Blackwell, 1995.

[106] R. Rosenthal and R. L. Rosnow, Essentials of Behavioral Research: Methods and Data Analysis. Boston, MA: McGraw-Hill, 1991.

[107] S. Siegel and N. J. Castellan, Nonparametric Statistics for the Behavioral Sciences. Boston, MA: McGraw-Hill, 1998.

[108] K. R. Popper, Logic of Scientific Discovery. New York: Routledge, 1992.

[109] N. Kock, “Changing the focus of business process redesign from activity flows to information flows: A defense acquisition application,” Acquisition Rev. Quart., vol. 8, no. 2, pp. 93–110, 2001.

[110] A. S. Lee, “Rigor and relevance in MIS research: Beyond the approach of positivism alone,” MIS Quart., vol. 23, no. 1, pp. 29–33, 1999.

[111] T. H. Davenport and M. L. Markus, “Rigor vs. relevance revisited: Response to Benbasat and Zmud,” MIS Quart., vol. 23, no. 1, pp. 19–23, 1999.

[112] M. L. Markus and A. S. Lee, “Special issue on intensive research in information systems: Using qualitative, interpretive, and case methods to study information technology,” MIS Quart., vol. 23, no. 1, pp. 35–38, 1999.

[113] K. M. Eisenhardt, “Building theory from case study research,” Acad. Manage. Rev., vol. 14, no. 4, pp. 532–550, 1989.

Ned Kock is an associate professor in the College of Business and Economics at Lehigh University. He holds a Ph.D. in information systems from the University of Waikato, New Zealand. Ned is the author or co-author of four books, including the general interest book Compensatory Adaptation: Understanding How Obstacles Can Lead to Success and the business-oriented book Process Improvement and Organizational Learning: The Role of Collaboration Technologies. He has also authored or co-authored over 80 articles published in refereed journals and conference proceedings.
