

Task-Technology Fit and Individual Performance

Keywords: Task-technology fit, individual performance, impact of information technology

ISRL Categories: AA03, DC02, DC06, EI02, EL03, GB02

By: Dale L. Goodhue
Information and Decision Sciences
University of Minnesota
271 19th Ave. South
Minneapolis, MN 55455 U.S.A.
[email protected]

Ronald L. Thompson
School of Business Administration
University of Vermont
Burlington, VT 05405 U.S.A.
[email protected]

Abstract

A key concern in Information Systems (IS) research has been to better understand the linkage between information systems and individual performance. The research reported in this study has two primary objectives: (1) to propose a comprehensive theoretical model that incorporates valuable insights from two complementary streams of research, and (2) to empirically test the core of the model. At the heart of the new model is the assertion that for an information technology to have a positive impact on individual performance, the technology: (1) must be utilized and (2) must be a good fit with the tasks it supports. This new model is moderately supported by an analysis of data from over 600 individuals in two companies. This research highlights the importance of the fit between technologies and users' tasks in achieving individual performance impacts from information technology. It also suggests that task-technology fit, when decomposed into its more detailed components, could be the basis for a strong diagnostic tool to evaluate whether information systems and services in a given organization are meeting user needs.

Introduction

The linkage between information technology and individual performance has been an ongoing concern in IS research. This article presents and tests a new, comprehensive model of this linkage by drawing on insights from two complementary streams of research (user attitudes as predictors of utilization and task-technology fit as a predictor of performance). The essence of this new model, called the Technology-to-Performance Chain (TPC), is the assertion that for an information technology to have a positive impact on individual performance, the technology must be utilized, and the technology must be a good fit with the tasks it supports.

This new model is consistent with one proposed by DeLone and McLean (1992) in that both utilization and user attitudes about the technology lead to individual performance impacts. It goes beyond the DeLone and McLean model in two important ways. First, it highlights the importance of task-technology fit (TTF) in explaining how technology leads to performance impacts. Task-technology fit is a critical construct that was missing or only implicit in many previous models. Second, it is more explicit concerning the links between the constructs, providing a stronger theoretical basis for thinking about a number of issues relating to the impact of IT on performance. These include: making choices for surrogate measures of MIS success,¹ understanding the impact of user involvement on performance, and developing better diagnostics for IS problems.

¹ "MIS success" is variously described as improved productivity (Bailey and Pearson, 1983), changes in organizational effectiveness, utility in decision making (Ives, et al., 1983), higher relative value or net utility of a means of inquiry (Swanson, 1974; 1982), etc. Thus, MIS success ultimately corresponds to what DeLone and McLean (1992) label individual impact or organizational impact. For our purposes, the paper focuses on individual performance impacts as the dependent variable of interest.

MIS Quarterly/June 1995 213


This paper describes the technology-to-performance chain model, and its major relationships are tested empirically using data from over 600 individuals using 25 different information technologies and working in 26 different departments in two companies.

Models Linking Technology and Performance

Described below are the two research streams mentioned earlier and the limitations of relying completely on either one alone.

Utilization focus research

The first (and most common) of the two complementary research streams on which the TPC is based is the "utilization focus" stream. This stream employs user attitudes and beliefs to predict the utilization of information systems (e.g., Cheney, et al., 1986; Davis, 1989; Davis, et al., 1989; Doll and Torkzadeh, 1991; Lucas, 1975; 1981; Robey, 1979; Swanson, 1987; Thompson, et al., 1991). The top model in Figure 1 shows a rough model of the way in which technology is said to affect performance in this research.

Most of the utilization research is based on theories of attitudes and behavior (Bagozzi, 1982; Fishbein and Ajzen, 1975; Triandis, 1980). Aspects of the technology (for example, high quality systems (Lucas, 1975) or chargeback policies (Olson and Ives, 1982)) lead to user attitudes (beliefs, affect) about systems (for example, usefulness (Davis, 1989) or user information satisfaction (Baroudi, et al., 1986)). User attitudes, along with social norms (Hartwick and Barki, 1994; Moore and Benbasat, 1992) and other situational factors, lead to intentions to utilize systems and ultimately to increased utilization. Stated or unstated, the implication is that increased utilization will lead to positive performance impacts.

Task-technology fit focus research

A smaller number of researchers have focused on situations where utilization can often be assumed and have argued that performance impacts will result from task-technology fit—that is, when a technology provides features and support that "fit" the requirements of a task. This view is shown by the middle model in Figure 1, in which fit determines performance (and sometimes utilization) but without the richer model of utilization from above as a critical predictor of performance.

The "fit" focus has been most evident in research on the impact of graphs versus tables on individual decision-making performance. Two studies report that over a series of laboratory experiments, the impact of data representation on performance seemed to depend on fit with the task (Benbasat, et al., 1986; Dickson, et al., 1986). Another study proposes that mismatches between data representations (a technology characteristic) and tasks would slow decision-making performance by requiring additional translations between data representations or decision processes (Vessey, 1991). Still others found strong support for this linkage between "cognitive fit" and performance in laboratory experiments (Jarvenpaa, 1989; Vessey, 1991).

The case has been made for a more general "fit" theory of tasks, systems, individual characteristics, and performance (Goodhue, 1988). This study proposes that information systems (systems, policies, IS staff, etc.) have a positive impact on performance only when there is correspondence between their functionality and the task requirements of users.

There have also been links suggested between fit and utilization (shown by the dotted arrow in the middle model of Figure 1). At the organizational level, "fit" and utilization or adoption have been linked (Cooper and Zmud, 1990; Tornatzky and Klein, 1982). At the individual level, a "system/work fit" construct has been found to be a strong predictor of managerial electronic workstation use (Floyd, 1986; 1988).

Limitations of the utilization focus model

While each of these perspectives gives insight into the impact of information technology on performance, each alone has some important limitations. First, utilization is not always voluntary.


[Figure 1. Three models linking technology to performance: a utilization focus model, a task-technology fit focus model, and a combined model in which utilization and task-technology fit jointly determine performance impacts]


For many system users, utilization is more a function of how jobs are designed than the quality or usefulness of systems, or the attitudes of users toward using them. To the extent that utilization is not voluntary, performance impacts will depend increasingly upon task-technology fit rather than utilization.

Second, there is little explicit recognition that more utilization of a system will not necessarily lead to higher performance. Utilization of a poor system (i.e., one with low TTF) will not improve performance, and poor systems may be utilized extensively due to social factors, habit, ignorance, availability, etc., even when utilization is voluntary. For example, a study involving IRS auditors found that even though they have positive attitudes toward personal computers (PCs) and use them extensively, utilization has little positive impact on performance, and possibly negative impacts (Pentland, 1989). The suggested reason for this was that PCs and their software were a poor fit to the task portfolio of the auditors (Pentland, 1989).

Limitations of fit focus models

Models focusing on fit alone do not give sufficient attention to the fact that systems must be utilized before they can deliver performance impacts. Since utilization is a complex outcome, based on many other factors besides fit (such as habit, social norms, and other situational factors), the fit model can benefit from the addition of this richer understanding of utilization and its impact on performance. The bottom model from Figure 1 shows the two perspectives combined, with performance determined jointly by utilization and TTF.

A New Model: The Technology-to-Performance Chain

Figure 2 shows a more detailed picture of the combination of theories focusing on utilization and task-system fit. This technology-to-performance chain (TPC)² is a model of the way in which technologies lead to performance impacts at the

individual level. By capturing the insights of both lines of research and recognizing that technologies must be utilized and must fit the tasks they support to have a performance impact, this model gives a more accurate picture of the way in which technologies, user tasks, and utilization relate to changes in performance. The major features of the full model in Figure 2 are described below before the focus is narrowed to a reduced model that is more easily tested empirically.

Technologies are viewed as tools used by individuals in carrying out their tasks. In the context of information systems research, technology refers to computer systems (hardware, software, and data) and user support services (training, help lines, etc.) provided to assist users in their tasks. The model is intended to be general enough to focus on either the impacts of a specific system or the more general impacts of the entire set of systems, policies, and services provided by an IS department.

Tasks are broadly defined as the actions carried out by individuals in turning inputs into outputs.³ Task characteristics of interest include those that might move a user to rely more heavily on certain aspects of the information technology. For example, the need to answer many varied and unpredictable questions about company operations would move a user to depend more heavily upon an information system's capacity to process queries against a database of operational information.

Individuals may use technologies to assist them in the performance of their tasks. Characteristics of the individual (training, computer experience, motivation) could affect how easily and well he or she will utilize the technology.

Task-technology fit (TTF) is the degree to which a technology assists an individual in performing his or her portfolio of tasks.

² An earlier version of this model was first presented by Goodhue (1992).

³ There is potential for some confusion in terminology here. Organizational researchers sometimes define technology quite broadly as actions used to transform inputs into outputs (e.g., Perrow, 1967; Fry and Slocum, 1984). That is, technologies are the tasks of individuals producing outputs. This paper differentiates technologies from tasks.


[Figure 2. The Technology-to-Performance Chain: task, technology, and individual characteristics determine task-technology fit; task-technology fit, expected consequences of utilization (beliefs), affect toward use, social norms, habit, and facilitating conditions shape utilization; task-technology fit and utilization together produce performance impacts, with feedback to the precursors of utilization]


More specifically, TTF is the correspondence between task requirements, individual abilities, and the functionality of the technology.⁴

The antecedents of TTF are the interactions between task, technology, and individual. Certain kinds of tasks (for example, interdependent tasks requiring information from many organizational units) require certain kinds of technological functionality (for example, integrated databases with all corporate data accessible to all). As the gap between the requirements of a task and the functionalities of a technology widens, TTF is reduced. Starting with the assumption that no system provides perfect data to meet complex task needs without any expenditure of effort (i.e., there is usually some non-zero gap), we believe that as tasks become more demanding or technologies offer less functionality, TTF will decrease (Goodhue, forthcoming).

Utilization is the behavior of employing the technology in completing tasks. Measures such as the frequency of use or the diversity of applications employed (Davis, et al., 1989; Thompson, et al., 1991; 1994) have been used. However, the construct is arguably not yet well understood, and efforts to refine the conceptualization should be grounded in an appropriate reference discipline (Trice and Treacy, 1988).

Since the lower portion of the TPC model in Figure 2 is derived from other theories about attitudes (beliefs or affect) and behavior (Bagozzi, 1982; Fishbein and Ajzen, 1975; Triandis, 1980), it would seem an appropriate reference discipline. Consider the utilization of a specific system for a single, defined task in light of those theories. Beliefs about the consequences of use, affect toward use, social norms, etc. would lead to the individual's decision to use or not use the system. In this case, utilization should be conceptualized as the binary condition of use or no-use. We would not be interested in how long the individual used the system at this single, defined task, since length of use would be a consequence of the size of the task and/or the TTF of the system, not the choice to use the system.

⁴ Perhaps a more accurate label for the construct would be task-individual-technology fit, but the simpler TTF label is easier to use.

If the focus is expanded to include a portfolio of some number of tasks (such as in a field study of information systems use), then the appropriate conceptualization would be the proportion of times the individual decided to use the system (the sum of the decisions to use, divided by the number of tasks). Note that this is quite different from conceptualizing utilization as the length of time or the frequency with which a system was used. Knowing that an individual decided to use a system three times means one thing if there were only four tasks, but something else if there were 20 tasks.
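To make the distinction concrete, the proportion conceptualization above can be sketched in a few lines of code; the function name and the task counts are illustrative, not part of the study's instrument.

```python
def utilization(use_decisions: int, n_tasks: int) -> float:
    """Utilization as the proportion of task occasions on which the
    individual decided to use the system (decisions to use / tasks)."""
    if n_tasks == 0:
        raise ValueError("portfolio must contain at least one task")
    return use_decisions / n_tasks

# Three decisions to use mean different things depending on the
# size of the task portfolio:
print(utilization(3, 4))   # 0.75 - system chosen for most tasks
print(utilization(3, 20))  # 0.15 - system chosen for few tasks
```

Unlike raw frequency of use, this measure is anchored to the size of the task portfolio, which is exactly the contrast the paragraph above draws.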

The antecedents of utilization can be suggested by theories about attitudes and behavior, as described above. Note that both voluntary and mandatory utilization are reflected in the model. Mandatory use can be thought of as a situation where social norms to use a system are very strong and overpower other considerations such as beliefs about expected consequences and affect.

The impact of TTF on utilization is shown via a link between task-technology fit and beliefs about the consequences of using a system. This is because TTF should be one important determinant of whether systems are believed to be more useful, more important, or give more relative advantage. All of these related constructs have been shown to predict utilization of systems (Davis, 1989; Hartwick and Barki, 1994; Moore and Benbasat, 1992), though they are not the only determinant, as the model shows.

Performance impact in this context relates to the accomplishment of a portfolio of tasks by an individual. Higher performance implies some mix of improved efficiency, improved effectiveness, and/or higher quality. As shown in Figure 2, not only does high TTF increase the likelihood of utilization, but it also increases the performance impact of the system regardless of why it is utilized. At any given level of utilization, a system with higher TTF will lead to better performance since it more closely meets the task needs of the individual.

Feedback is an important aspect of the model. Once a technology has been utilized and performance effects have been experienced, there


will inevitably be a number of kinds of feedback. First, the actual experience of utilizing the technology may lead users to conclude that the technology has a better (or worse) impact on performance than anticipated, changing their expected consequences of utilization and therefore affecting future utilization. The individual may also learn from experience better ways of utilizing the technology, improving individual-technology fit, and hence the overall TTF.

A reduced model for testing

The TPC is a large model and difficult to test in a single study. Arguably, portions of it have already been tested by a variety of researchers. Support for a "fit" relationship between task characteristics, technology characteristics, and individual characteristics on the one hand, and user evaluations of TTF on the other (i.e., the top portion of Figure 2) was found by Goodhue (forthcoming). Support for the link between TTF and performance (i.e., the top portion of Figure 2 plus performance impacts) was found by Jarvenpaa (1989) and Vessey (1991). Support for the precursors of utilization (i.e., the bottom box) has been found by Adams, et al. (1992), Davis (1989), Davis, et al. (1989), Mathieson (1991), and Thompson, et al. (1991; 1994). None of this work has been tested across the full scope of the model.

The goal of our study was to test across all the core components of the model, from task and technology to performance impacts, with a particular emphasis on the role of task-technology fit. Figure 3 shows the reduced model to be tested. The biggest change from Figure 2 to Figure 3 is the direct link from TTF to utilization in Figure 3. This is based on two important assumptions: first, that TTF will strongly influence user beliefs about consequences of utilization; and second, that these user beliefs will have an effect on utilization. Specifically, we tested the following propositions (see Figure 3):

Proposition 1: User evaluations of task-technology fit will be affected by both task characteristics and characteristics of the technology.

Proposition 2: User evaluations of task-technology fit will influence the utilization of information systems by individuals.

Proposition 3: User evaluations of task-technology fit will have additional explanatory power in predicting perceived performance impacts beyond that from utilization alone.

Methodology

Research design

As is common in this type of research, we faced a decision of whether to test the TPC model within a narrowly controlled domain and generalize to a more global domain, or to test the model in a more generalized domain. A more narrowly controlled domain would have removed extraneous influences, but made generalization more difficult. We decided to focus at a more macro level and to span multiple technologies, multiple tasks, multiple types of users, and multiple organizational settings. Thus, we were testing to see whether a general measure of TTF (at the individual level) would exhibit the relations suggested by the TPC model. If it did, then we would have demonstrated support for the TPC model at a very high level of generalization.

The sample included over 600 users, employing 25 different technologies, working in 26 different non-IS departments in two very different organizations. The sample spanned the organizational hierarchy from administrative/clerical staff to vice president and up. In company A (a transportation enterprise) questionnaires were sent out to approximately 1,200 users (a random sample of a major fraction of the company's non-union, non-IS employees, stratified by department). A total of 400 questionnaires were completed and returned to a company representative, for a response rate of approximately 33 percent.

For company B (an insurance company), the questionnaire was delivered to a majority of non-IS employees. Employees were given 30 minutes of company time to complete the survey. A total of 262 were returned, for a gross


[Figure 3. The reduced model tested: task characteristics and technology characteristics affect user evaluations of task-technology fit (P1); task-technology fit influences utilization (P2); task-technology fit and utilization together determine performance impacts (P3)]


response rate of 93 percent. The total usable respondents from both companies was 662.

Measures, measurement validity, reliability

Where possible, measures were adapted from previous research. Because of a lack of adequate measurement scales, however, it was necessary to develop and refine some measures specifically for this study.

Task-technology fit has been measured by Goodhue (1993; forthcoming) within the user task domain of IT-supported decision making. From Goodhue's instrument we borrowed multiple questions on each of 14 dimensions of TTF addressing the extent to which existing information systems support the identification, access, and interpretation of data for decision making.

To expand the task domain somewhat, two additional IT-supported user tasks were added: (1) responding to changed business requirements with new and modified systems, and (2) executing day-to-day business transactions. For these two new tasks multiple questions were developed on each of seven new dimensions addressing the extent to which IS meets user task needs: having sufficient understanding of the business, having sufficient interest and dedication, providing effective technical and business planning assistance, delivering agreed-upon solutions on time, responsiveness on requests for services, production timeliness, and impact of IS policies and standards on ability to do the job. Altogether this resulted in 48 questions measuring 21 dimensions of TTF.

⁵ A concern with the two-company sample described above is that the model may apply so differently in the two companies that it is inappropriate to pool the data for a single analysis. We used Neter and Wasserman's (1974, pp. 160-161; see also Pedhazur, 1982, pp. 436-450) test for the equivalence of two regression lines to test whether it is appropriate to pool the data from the two companies. This involves testing a full model giving each company its own intercept and beta values, and comparing that to a restricted model with a single intercept and a single set of shared beta values. This test was performed for the regressions predicting utilization and performance impacts. In neither case was the improvement in fit for the full model significant at the .05 level, supporting our approach of pooling the data.
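The full-versus-restricted comparison described in this footnote can be sketched as a standard F test on separate versus pooled ordinary least squares fits. This is an illustration, not the authors' code; the synthetic data below assume both companies share one regression line, so the statistic should be small.

```python
import numpy as np

def sse(X, y):
    """Sum of squared errors from an OLS fit with an intercept."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    return float(resid @ resid)

def pooling_F(X1, y1, X2, y2):
    """F statistic comparing a full model (separate intercept and betas
    per company) with a restricted model (one shared regression line).
    A small F supports pooling the two companies' data."""
    k = X1.shape[1] + 1                       # parameters per company
    sse_full = sse(X1, y1) + sse(X2, y2)      # separate lines
    sse_restricted = sse(np.vstack([X1, X2]),
                         np.concatenate([y1, y2]))  # one shared line
    n = len(y1) + len(y2)
    q = k                                     # equality restrictions imposed
    return ((sse_restricted - sse_full) / q) / (sse_full / (n - 2 * k))

# Illustrative data: both "companies" generated from the same line.
rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(40, 1)), rng.normal(size=(40, 1))
y1 = 2.0 + 3.0 * X1[:, 0] + rng.normal(scale=0.5, size=40)
y2 = 2.0 + 3.0 * X2[:, 0] + rng.normal(scale=0.5, size=40)
F = pooling_F(X1, y1, X2, y2)
print(round(F, 3))  # compare against the F(q, n - 2k) critical value at .05
```

Because the restricted model is nested within the full model, the statistic is non-negative; it only grows large when the two groups genuinely need different regression lines.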

Based on an assessment of the reliability and discriminant validity of the questions, 14 questions (and 5 dimensions) were dropped as being unsuccessfully measured.⁶ Using a principal components factor analysis with promax rotation, the remaining 34 questions (including 16 of the 21 original dimensions) were collapsed into eight clearly distinct factors of TTF. For all questions, factor loadings were at least .50 on the primary factor, and no more than .45 on any secondary factor. For only one question was the difference between the primary and the secondary loading less than .20, and in this one case the difference was .10.
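The retention rule described here (primary loading at least .50, no secondary loading above .45) can be sketched as a simple screen over a loading matrix. The loadings below are hypothetical, invented to show the rule, not taken from the study.

```python
import numpy as np

def retained(loadings, primary_min=0.50, secondary_max=0.45):
    """Screen a (questions x factors) loading matrix: keep a question if
    its largest absolute loading is at least primary_min and every other
    loading is at most secondary_max."""
    L = np.abs(np.asarray(loadings, dtype=float))
    order = np.sort(L, axis=1)[:, ::-1]        # loadings sorted high-to-low
    primary, secondary = order[:, 0], order[:, 1]
    return (primary >= primary_min) & (secondary <= secondary_max)

# Hypothetical loadings for three questions on two factors:
L = [[0.72, 0.18],   # clean primary loading  -> keep
     [0.55, 0.48],   # secondary too high     -> drop
     [0.41, 0.30]]   # primary too low        -> drop
print(retained(L))   # [ True False False]
```

A screen like this makes the paper's thresholds mechanical: each question must load clearly on one factor and only weakly on the rest.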

Table 1 shows the mapping from the 16 remaining dimensions of TTF to the eight final TTF factors, as well as the Cronbach's alpha reliabilities for the eight factors, ranging from .60 to .88. This grouping of dimensions seemed quite reasonable, since similar dimensions were collected into more aggregate but still coherent TTF factors. The final eight components of TTF that were successfully measured included (1) data quality; (2) locatability of data; (3) authorization to access data; (4) data compatibility (between systems); (5) training and ease of use; (6) production timeliness (IS meeting scheduled operations); (7) systems reliability; and (8) IS relationship with users. The first five factors focused on meeting task needs for using data in decision making. The next two focused on meeting day-to-day operational needs, and the last focused on responding to changed business needs. The successful TTF questions are listed in the Appendix, Part A.
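For readers unfamiliar with the reliability statistic reported here, Cronbach's alpha can be computed directly from a respondents-by-questions score matrix. The formula is standard; the example responses below are invented for illustration and are not the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x questions) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 7-point responses to a three-question factor; highly
# consistent answers give an alpha near the top of the paper's range.
scores = np.array([[6, 7, 6],
                   [3, 3, 4],
                   [5, 5, 5],
                   [2, 1, 2],
                   [7, 6, 7]])
print(round(cronbach_alpha(scores), 2))
```

Values near 1 indicate that the questions move together and so plausibly measure one underlying dimension, which is why the paper reports an alpha for each of the eight TTF factors.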

Task characteristics and their impact on information use have been studied by a great many researchers (e.g., Culnan, 1983; Daft and Macintosh, 1981; O'Reilly, 1982). Following Fry and Slocum's (1984) suggestion of a general characterization of tasks, Goodhue (forthcoming) combined Perrow's (1967) and Thompson's (1967) dimensions and successfully measured a two-dimensional construct of task characteristics: non-routineness (lack of analyzable search behavior) and interdependence (with other organizational units).

⁶ Details of the analysis of the measurement validity of all measures, as well as a correlation matrix, are available from the authors upon request.


Table 1. Results of Factor Analysis: 16 Original Task-Technology Fit Dimensions* and 8 Final Task-Technology Fit Factors

Each final TTF factor is listed with its Cronbach's alpha and the original dimensions (after poor questions were dropped) it comprises:

- Quality (.84): Currency of the data; Right data is maintained; Right level of detail
- Locatability (.75): Locatability; Meaning of data is easy to find out
- Authorization (.60): Authorization for access to data
- Compatibility (.70): Data compatibility
- Ease of Use/Training (.74): Ease of use; Training
- Production Timeliness (.69): Production timeliness
- Systems Reliability (.71): Systems reliability
- Relationship With Users (.88): IS understanding of business; IS interest and dedication; Responsiveness; Delivering agreed-upon solutions; Technical and business planning assistance

* After 5 of the original 21 TTF dimensions were dropped as unsuccessfully measured.

Five measures of task characteristics (three questions on non-routineness and two on interdependence) were adopted from Goodhue's (forthcoming) study, as shown in the Appendix, Part B. A factor analysis separated the questions into two factors, with all questions loading at least .51 on their primary factor and no more than .36 on their secondary factor. Cronbach's alpha reliabilities were .73 and .76 for non-routineness and interdependence, respectively.
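The reliabilities reported here and in Table 1 follow the standard Cronbach's alpha formula, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal sketch of the computation (the response matrix below is fabricated for illustration, not data from this study):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each question
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative (fabricated) responses from five people to three questions:
scores = np.array([[5, 6, 5],
                   [2, 3, 2],
                   [6, 6, 7],
                   [3, 2, 3],
                   [4, 5, 4]])
print(round(cronbach_alpha(scores), 2))  # prints 0.96
```

High inter-item correlation drives alpha toward 1; the .60 reported for the single-dimension Authorization factor is the floor of acceptability by the usual rule of thumb.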

In addition to these general characteristics of tasks, several researchers have suggested managerial level as a determinant of user evaluations of IS (e.g., Cronan and Douglas, 1990; Franz and Robey, 1986). It is certainly true that the kinds of tasks users engage in (and the demands they make on their information systems and service providers) should vary considerably from clerical staff to low-level managers to higher-level managers. As a pragmatic proxy to capture these kinds of task differences, dummy variables were used for each of eight groupings of job title. Job titles in the two companies are shown in Table 2, matched where possible across the two companies. Though no specific hypotheses were made, we expected that differences in job title would affect user evaluations of TTF.

Technology characteristics facing users could be measured along a number of dimensions. We focused on two proxies for the underlying characteristics of the technology of information systems: first, the information systems used by each respondent, and second, the department of the respondent. The two organizations provided a large range of information systems for their employees. As part of the customization of the questionnaire, about 20 major systems in each company were identified. Each respondent identified up to five of these that they actually used. Twenty-five major systems (13 in Company A and 12 in Company B) were used by a minimum of five employees.

Rather than try to define each system in terms of its characteristics, we made the simplifying assumption that the characteristics of any given system are the same for all who use that system. For respondents who used only a single system, the characteristics were captured by a dummy variable (1 indicates use of this system; 0 indicates no use). Where respondents used more than one system, the dummy variables were weighted. The weighting was accomplished by simply dividing 1 by the number of major systems used. For example, a respondent who used three major systems would receive a


weighting of .33 for each of these three and a weighting of zero for all other systems. This approach allowed us to capture inherent differences between technologies without having to explicitly define those differences. In effect, the collection of dummy variables for system was used as a proxy for different, unspecified system characteristics.
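The weighting rule is simple to state precisely. A sketch of the construction (the system names are hypothetical placeholders, not the roughly 20 actual systems per company):

```python
def system_dummies(systems_used, all_systems):
    """Weighted system dummies: each system a respondent uses gets a weight
    of 1 / (number of major systems used); all other systems get 0."""
    weight = 1.0 / len(systems_used) if systems_used else 0.0
    return {s: (weight if s in systems_used else 0.0) for s in all_systems}

# A respondent using three of four (hypothetical) major systems:
major_systems = ["SYS_A", "SYS_B", "SYS_C", "SYS_D"]
weights = system_dummies(["SYS_A", "SYS_C", "SYS_D"], major_systems)
# SYS_A, SYS_C, and SYS_D each receive 0.33...; SYS_B receives 0.0
```

Note that the weights for any user of at least one system sum to 1, so each respondent contributes equally to the system effects regardless of how many systems he or she uses.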

The department of the respondent was also used as a second proxy measure for the characteristics of information systems. IS departments may have differentiated between user departments in terms of attention, emphasis, priority, and relationship management, perhaps because of the organization's strategic direction or historic inertia. These differences could have affected the level of service experienced by respondents in the different departments. A set of departmental dummy variables was used to capture the potentially different levels of attention paid by IS departments to each of 26 distinct user departments.

These were somewhat crude measures of characteristics of information systems and services, involving a great many dummy variables. Given our relatively large sample size, however, these measures did allow us to test the assertion that user evaluations were a function of the underlying systems used and the departments users were in.

Utilization should ideally be measured as the proportion of times users choose to utilize systems. Unfortunately, this proportion was extremely difficult to ascertain in a field study. In addition, there was also the problem of mandatory use. In many field situations, use of a system may be mandated as part of a job description. For example, a claims processor with the insurance company (Company B) had no choice but to use the system provided by his or her IS department. Regardless of the claims processor's evaluation of the system, it was not possible to process claims without using it.

Our solution was to conceptualize utilization as the extent to which the information systems have been integrated into each individual's work routine, whether by individual choice or by organizational mandate. This reflected the individual (or organizational) choice to accept the systems, or the institutionalization of those systems.

We operationalized this by asking users to rate how dependent they were on a list of systems available in their organizations. Respondents selected up to five systems that were major sources of information for them personally and self-reported on system-specific dependence. Dependence on each system was rated on a three-point scale (0 = not very dependent; 1 = somewhat dependent; 2 = very dependent). Overall dependence on systems (our measure of utilization) was calculated as the total dependence reported on up to five systems (the sum of the dependence responses).
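The resulting utilization score is simply a sum of ratings. A minimal sketch of the scoring rule (the labels paraphrase the questionnaire's three-point scale):

```python
# The three-point dependence scale described above.
DEPENDENCE = {"not very dependent": 0, "somewhat dependent": 1, "very dependent": 2}

def utilization_score(ratings):
    """Total dependence over the (up to five) systems a respondent selected."""
    if len(ratings) > 5:
        raise ValueError("respondents rated at most five systems")
    return sum(DEPENDENCE[r] for r in ratings)

# A respondent very dependent on two systems and somewhat dependent on a third:
score = utilization_score(["very dependent", "somewhat dependent", "very dependent"])
# score == 5, on a possible range of 0 to 10
```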

Performance impact was measured by perceived performance impacts, since objective measures of performance were unavailable in this field context and at any rate would not have been compatible across individuals with different task portfolios. Three questions were used that asked individuals to self-report on the perceived impact of computer systems and services on their effectiveness, productivity, and performance in their job. Low correlations between one question and the others (.23 and .21) suggested that it was measuring something quite different from the other two (in this case, problems with the IS department as opposed to impact of systems on performance). This third question was

Table 2. Matched Job Titles Across the Two Companies

          Transportation Company             Insurance Company
Group 1   Administrative staff               Clerical staff
Group 2   Analyst                            Technical
Group 3   Supervisor/asst. manager           Supervisor
Group 4   Manager/asst. director             Manager
Group 5   Director/asst. superintendent      Director
Group 6   Superintendent/general super.      VP and up
Group 7   Trainmaster, Roadmaster
Group 8                                      Professional level


Table 3. Tests of the Influence of Task and System on TTF: Results of Regression Analyses

TTF Factor      Non-Routine1  Interdependence1  Job2      Systems2  Department2  Adj. R-Square3
Relationship    -.11           .01              0.67      1.27      0.87         .04*
Quality         -.27***       -.04              0.92      1.24      1.02         .12***
Timeliness      -.12          -.04              1.02      1.26      1.72*        .13***
Compatibility   -.37***       -.11*             4.92***   1.07      1.47         .25***
Locatability    -.26***        .04              1.86      1.52*     1.43         .13***
Ease/Training   -.15**        -.03              0.94      1.19      1.78**       .10***
Reliability      .00          -.13**            1.95      1.53*     1.33         .10***
Authority       -.29***       -.03              3.48***   0.87      1.10         .09***

1 Beta coefficients from regression analysis.
2 F-statistic, computed by removing the group of dummy variables and comparing the results from the reduced model to those from the full model.
3 Significance of the regression is indicated by the asterisks on the Adjusted R-Square.

Significance Key: * = Significant at .05; ** = Significant at .01; *** = Significant at .001.

removed. The Cronbach's alpha for the two remaining questions was .61, certainly lower than desired, but marginally acceptable. The wording of the two remaining questions is shown in the Appendix, Part C.

Empirical Test of the Model

Specific propositions and analysis approach

Descriptive statistics are shown in the Appendix, Part E. Figure 4 shows the model from Figure 3, with the specific measures for each construct added. Our choice of analysis techniques was based primarily on the perceived stage of theory development. Although structural equation modeling (as embodied in the LISREL program) would enable testing the entire model simultaneously, it requires very strong, precise theory (Barclay, et al., forthcoming) with well-established measures. This research was still early in the theory generation phase, and therefore we decided to use what Fornell (1982; 1984) refers to as first generation analysis techniques, in this case multiple regression.

P1: Do Task and Technology Characteristics Predict TTF?

The arrows leading into the Task-Technology Fit box in Figure 3 show Proposition 1. Strong support would require that each of the eight regressions of TTF be significant and that in each regression at least some measure of task (non-routineness, interdependence, or the group of dummy variables for job title) and some measure of technology (groups of dummy variables for systems used or department) be a significant predictor. Table 3 shows the analysis results.

Testing the significance of job title, system, and department was somewhat involved, since each of these was operationalized as a group of dummy variables. We followed the approach suggested by Neter and Wasserman (1974, p. 274) and used a general linear test for the significance of each group of dummy variables in turn. Therefore, each line in Table 3 actually represents four separate regressions: a full regression with all dummy variables included and three additional regressions, dropping in turn the job dummy variables, the systems dummy variables, and the department dummy variables. Columns 3, 4, and 5 show the F-tests for the significance of groups of dummy variables (job title, systems, and department) obtained by comparing the full model to each of three restricted models. Columns 1 and 2 show the impact of non-routine tasks and interdependent tasks, directly obtainable from the full regression since these are not groups of dummy variables.
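The general linear test compares the residual sum of squares of the full and reduced models. A sketch of the F-statistic computation under that approach (using simulated data, not the study's, with one dummy variable standing in for a dummy group):

```python
import numpy as np

def sse_and_params(X, y):
    """OLS fit of y on X (intercept added); returns (residual SS, no. of parameters)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return float(resid @ resid), X1.shape[1]

def general_linear_test(X_full, X_reduced, y):
    """F-statistic for dropping a group of predictors: the increase in residual SS
    per dropped predictor, relative to the full model's error variance."""
    sse_f, p_f = sse_and_params(X_full, y)
    sse_r, p_r = sse_and_params(X_reduced, y)
    return ((sse_r - sse_f) / (p_f - p_r)) / (sse_f / (len(y) - p_f))

# Simulated example: y depends on a task measure x and a group-membership dummy d.
rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
d = (rng.random(n) < 0.5).astype(float)
y = x + 3 * d + rng.normal(size=n)
F = general_linear_test(np.column_stack([x, d]), x.reshape(-1, 1), y)
# F is large here, since dropping d clearly worsens the fit
```

The resulting F-statistic is compared against the F distribution with (dropped predictors, n minus full-model parameters) degrees of freedom, which is how the asterisks in columns 3 through 5 of Table 3 would be obtained.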

The R² values for the full regressions with all dummy variables ranged from .14 to .33, with adjusted R² values from .04 to .25. All but one


[Figure 4. The model of Figure 3 with the specific measures for each construct: task characteristics (non-routineness, interdependence, job title), technology characteristics (systems used, department), the eight TTF factors, utilization (dependence), and perceived performance impact. Graphic not reproduced.]


of the regressions (the one predicting the quality of the relationship with IS) was significant at greater than the .001 level. At least one task characteristic was significant in six out of eight regressions, and at least one technology characteristic was significant in four out of eight. This is moderate support for Proposition 1. Below is a more detailed examination of the findings.
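The gap between the raw and adjusted R² values reflects the large number of dummy-variable predictors in each regression. The standard adjustment can be sketched as follows (the predictor count of 61 is our illustrative approximation of the task measures plus job, system, and department dummies, not a figure reported in the paper):

```python
def adjusted_r2(r2, n, p):
    """Adjusted R-squared for n observations and p predictors (plus an intercept)."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# E.g. a raw R-squared of .33 with roughly 600 respondents and ~61 predictors:
print(round(adjusted_r2(0.33, 600, 61), 2))  # prints 0.25
```

With this many predictors, even a modest raw R² loses several points after adjustment, which is why the adjusted values are the more conservative figures to report.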

Effect of Task Characteristics on TTF. The strongest effect of task characteristics on TTF was from non-routine tasks. We found that individuals engaged in more non-routine tasks rated their information systems lower on data quality, data compatibility, data locatability, training/ease of use, and ease of getting authorization to access data (note the significance and the negative coefficients in column 1 of Table 3). This is consistent with the idea that because of the non-routine nature of their jobs, these people are constantly forced to use information systems to address new problems, such as seeking out new data and combining it in unfamiliar ways. Thus, they make more demands on systems and are more acutely aware of shortcomings. Interdependence of job tasks (column 2 of Table 3) was observed to influence perceptions of the compatibility and reliability of systems.

Finally, two factors of TTF are clearly affected by job level (column 3 of Table 3): compatibility and ease of getting authorization for access. Tables 4 and 5 show a more detailed analysis of the specific impact of the various job titles on these two factors. Lower and middle-level staff and managers found the data least compatible, while upper-level management found it most compatible. This is consistent with the proposition that upper-level management is often shielded from the hands-on difficulties of bringing together data from multiple sources and sees it only after the difficulties have been ironed out. It is lower and mid-level individuals who must pay with effort and frustration for data incompatibilities.

Similarly, Table 5 shows that upper-level management found it much easier to obtain authorization for access to data. On the other hand, administrative and clerical staff, with less organizational clout, faced red tape in getting permission to access the data they needed.

Effect of Technology Characteristics on TTF. The two proxies for characteristics of the technology were "systems used" and "department." Together these were significant predictors for four of the eight factors of TTF. The specific findings (see columns 4 and 5 of Table 3) have good face validity, although not all anticipated influences were observed.

For example, department is a significant predictor of user evaluations of production timeliness and of training/ease of use. If IS groups focus special emphasis on strategically important or powerful departments, we might expect that different levels of training and easier-to-use, more up-to-date systems would be provided to those departments. To the extent that IS groups have consistent standards for production turnaround, interface design, training policies, and so on, there are likely some departments for whom these standards are more appropriate than for others. A third area where we expected to see differences between departments, but did not, was the relationship with IS. (But see footnote 7 below.)

Systems used is a significant predictor of locatability and systems reliability. This too conforms to our expectations. We might expect that some systems are better than others for locatability of data or for system reliability, and users reflect that in their ratings. Another area where we expected to see differences between systems, but did not, was in the quality of the data. It is possible that our proxy measures of technology characteristics were too crude to pick up any but the strongest influences within this study.

7. The absence of an effect of department on relationship with IS and of system on quality of the data is sufficiently perplexing to suggest doing some secondary exploratory analysis. Since some systems may be department specific, there is the possibility that including dummy variables for both department and system in the same regression (47 dummy variables in all) dilutes the impact that either group alone would have. For this reason the data were reanalyzed, dropping system from the analysis of relationship with IS and dropping department from the analysis of quality. Under these circumstances we found both of the expected relationships. Without system in the analysis, department is a significant (.05) predictor of relationship with IS. Without department in the analysis, system is a significant (.05) predictor of quality. This suggests that with stronger measures of technology characteristics, this aspect of the model might have stronger empirical support.


Table 4. Effect of Job Titles on User Evaluations of Data Compatibility*

Administrative/Clerical Staff         -.33
Manager/Assistant Director            -.27
Director/Assistant Superintendent     -.23
Supervisor/Assistant Manager          -.10
Analyst/Technical                     -.08
Trainmaster/Roadmaster                 .00
Professional                           .26
Superintendent/VP and up               .29

* Job titles are ordered by impact from more negative to more positive. The numbers shown are the regression beta coefficients for the dummy variables reflecting membership in each job category. (Overall effect is significant at .001.)

In hindsight, it seems reasonable that characteristics of the technology would influence some but not all TTF components. For example, it is unlikely that differences between systems will have any influence on whether a user has the authority to access data; it is much more likely that job level will influence authority. Overall, these results suggest that task and technology characteristics do influence user ratings of task-technology fit, giving moderate support for Proposition P1.

P2: Does TTF Predict Utilization?

The arrow from Task-Technology Fit to Utilization in Figure 3 shows Proposition 2. Strong support would require a significant regression and significant positive links between at least some of the eight TTF factors and utilization. The results (shown in Table 6) provide little support for the hypothesized relation. Although the regression as a whole and three of the path coefficients were statistically significant, the adjusted R² was only .02.

In addition, two of the three significant path coefficients (reliability of systems and relationship with IS) had negative path coefficients. Interpreted within a theoretical framework in which attitudes (beliefs, affect) determine behavior, the two negative links suggest that users who believe that systems are less reliable and who are less positive about the relationship with IS will be more likely to use the systems. This contrary behavior seems implausible.

A more compelling interpretation is that in this case the causal effect works in the other direction (through the feedback mechanism shown in Figure 2). For example, perhaps individuals who use the systems a great deal and are very dependent on them will be more frustrated by system downtime and the performance impacts it engenders. These highly dependent users are more likely to be stymied in their work by downed systems and more likely to rate those systems as unreliable. Similarly, people who are more dependent on systems might be more frustrated with poor relationships with the IS department and might give poorer evaluations of that relationship. This is quite different from numerous findings showing the link from user attitudes (beliefs, affect) to utilization (e.g., Davis, 1989; Hartwick and Barki, 1994; Moore and Benbasat, 1992; Thompson, et al., 1991), but is consistent with arguments made by Melone (1990) that under certain circumstances utilization will influence attitudes.

Several possible explanations for lack of support for Proposition 2 should be noted. First, this paper has conceptualized utilization as dependence on information systems, rather than on the more common concept of duration or frequency of use. Though we have raised some questions about the applicability of these other conceptualizations in a field study with portfolios of tasks,

Table 5. Effect of Job Titles on User Evaluations of Ease of Authorization*

Administrative/Clerical Staff         -.32
Analyst/Technical                     -.11
Manager/Assistant Director             .00
Trainmaster/Roadmaster                 .00
Supervisor/Assistant Manager           .06
Director/Assistant Superintendent      .06
Superintendent/VP and up               .21
Professional                           .31

* Job titles are ordered by impact from more negative to more positive. The numbers shown are the regression beta coefficients for the dummy variables reflecting membership in each job category. (Overall effect is significant at .001.)


it might be that this shift in conceptualization is responsible for the weak link between TTF beliefs and behavior. Testing this was possible in a secondary analysis, since for Company B we had gathered additional utilization measures for duration and frequency of use. Two additional regressions were run, one with TTF predicting duration and the other with TTF predicting frequency. Though the R² increased to .10 for both new regressions, in each case the strongest link by far was between negative beliefs about "systems reliability" or "relationship with IS" and greater utilization. Thus, it appeared that our conceptualization of utilization is not responsible for the lack of support for Proposition 2.

A more promising explanation is that the direct link between TTF and utilization proposed in Figure 3 may not be justified in general. That is, TTF may not dominate the decision to utilize technology. Rather, other influences from attitudes and behavior theory, such as habit (Ronis, et al., 1989), social norms (and mandated use), etc., may dominate, at least in these organizations. This would suggest that testing the link between TTF and utilization requires much more detailed attention to other variables from attitudes and behavior research.

A third possibility is that none of the current conceptualizations of utilization are well suited for field settings where many technologies are available and individuals face a portfolio of tasks. The resolution to this dilemma will have to await further research.

P3: Does TTF Predict Performance Impact Better Than Utilization Alone?

Finally, the arrows from Task-Technology Fit and Utilization to Performance Impacts show Proposition 3. Strong support would require that both TTF and Utilization be significant predictors of Performance Impacts. Again the test suggested by Neter and Wasserman (1974, p. 274) was used to explicitly test for the importance of adding the eight TTF factors as a group to a regression predicting performance using utilization.

To get a complete picture, we ran three regressions predicting performance impact, using three different sets of independent variables: (1) only utilization, (2) only the eight TTF factors, and (3) both the eight TTF factors and utilization. The results are shown in Table 7. Utilization alone explained 4 percent (adjusted R²) of the variance in performance, while TTF alone explained 14 percent. Together, TTF and utilization explained 16 percent of the variance. The F-test of the improvement in fit from adding the eight TTF factors as a group was significant at the .001 level.

Table 7 (the full Model 3) shows that quality of the data, production timeliness, and relationship with IS all predict higher perceived impact of information systems, beyond what could be predicted by utilization alone. Though we need to be careful about generalizing too freely about the impact of specific factors of TTF from a sample including only two companies (including more companies in our sample might bring other factors into sharper focus), the results do strongly support Proposition 3. It appears that performance impacts are a function of both task-technology fit and utilization, not utilization alone.

Conclusion

Even with some caveats, the TPC model represents an important evolution in our thinking from the earlier models in Figure 1, which show how technologies add value to individual performance. We found moderately supportive evidence that user evaluations of TTF are a function of both systems characteristics and task characteristics, and strong evidence that to predict performance both TTF and utilization must be included. Evidence of the causal link between TTF and utilization was more ambiguous, with the suggestion that, at least in these companies, utilization could cause beliefs about TTF through feedback from performance outcomes.

" Although an adjusted R̂ of .16 is not high. It is in line wittiresult5 from other field research predicting user perceptionsof performance impacts (for example, Franz and Robey,1986).

One perplexing finding from Model 2 in Table 7 is the significant negative relationship between compatibility and performance impacts. However, this relationship drops to insignificance with Model 3 (including utilization), which we believe to be the correctly specified model. This suggests that the negative Model 2 relationship is spurious.


Table 6. Test of the Influence of TTF on Utilization (Dependence): Regression Results

TTF Factor      Beta Coefficient   t-value   Significance   Adjusted R-Square1
Relationship    -.21*              -2.11     .04            .02*
Quality          .08                0.76     .45
Timeliness       .15*               2.08     .04
Compatibility   -.13               -1.57     .12
Locatability     .14                1.41     .16
Ease/Training    .16                1.26     .21
Reliability     -.24*              -2.44     .02
Authority        .06                0.73     .47

1 Significance of the regression is indicated by the asterisk on the Adjusted R-Square.

Significance Key: * = Significant at .05; ** = Significant at .01; *** = Significant at .001.

However, the cumulative evidence of previous research showing the impact of usefulness (Adams, et al., 1992; Davis, et al., 1989; Mathieson, 1991), relative advantage (Moore and Benbasat, 1992), and importance (Hartwick and Barki, 1994) on utilization suggests that at least under some circumstances a link between TTF and utilization exists.

This new TPC model provides a fundamental conceptual framework useful in thinking about a number of issues in IS research. Several examples are discussed below.

Implications for surrogates of IS success

Since performance impacts from IT are difficult to measure directly, we often resort to surrogate measures of IS success. If appropriate surrogates are to be chosen, accurate models of the way information systems and services deliver

Table 7. Test of the Influence of TTF and Utilization on Performance Impact: Regression Results

                               Beta Coefficient   t-value   Significance   Adjusted R-Square1
Model 1: Utilization Only                                                  .04***
Utilization                     .13***             5.06      .0001

Model 2: TTF Only                                                          .14***
Relationship                    .11*               2.06      .04
Quality                         .24***             3.52      .0001
Timeliness                      .12**              2.69      .004
Compatibility                  -.12**             -2.52      .01
Locatability                    .07                1.25      .21
Ease/Training                   .12                1.76      .08
Reliability                    -.09               -1.67      .10
Authority                      -.01               -0.07      .95

Model 3: Utilization and TTF                                               .16***
Relationship                    .12*               2.08      .04
Quality                         .21***             3.32      .001
Timeliness                      .11**              2.65      .009
Compatibility                  -.08               -1.71      .09
Locatability                    .09                1.64      .10
Ease/Training                   .04                0.54      .59
Reliability                    -.06               -1.06      .29
Authority                      -.01               -0.07      .94
Utilization                     .11***             4.32      .0001

1 Significance of the regression is indicated by the asterisks on the Adjusted R-Square.

Significance Key: * = Significant at .05; ** = Significant at .01; *** = Significant at .001.


value are needed. The TPC model is useful in re-evaluating possible choices.

Many researchers have suggested that utilization is an appropriate surrogate when use is voluntary, and user evaluations are appropriate when use is mandatory (e.g., Lucas, 1975; 1981). If, as the model suggests, performance impacts are a joint function of utilization and TTF, then neither alone is a good surrogate except under very limited circumstances.

One might argue that either construct would be a good surrogate if the other were assured. For example, TTF might be a good surrogate if utilization were assured (i.e., mandatory). However, evidence from a recent study (Moore and Benbasat, 1992) suggests that voluntariness exists on a continuum, with most individuals engaging in partly voluntary behavior. If utilization is partly voluntary, then TTF alone is an incomplete surrogate for IS success. Similarly, though utilization might be a good surrogate if TTF were assured, it is rare that we could be sure a priori that information systems fit user needs and abilities exactly.

We might also defend using only one of the constructs as a surrogate for success if we could assume that utilization and TTF were highly correlated. Certainly in the two companies studied in this research, the link between TTF and utilization was not strong. If most utilization is partly voluntary, and utilization is only partly driven by expectations of performance impacts, then an appropriate surrogate for performance impacts should include measures of both TTF and utilization.

Implications for the impact of user involvement

By far the majority of research on user involvement and IS success has looked at the impact of user involvement on user attitudes, and ultimately on user commitment to utilize the system. Though this effect is not unimportant, the TPC model also directs our attention to another aspect of successful system implementation. When users who understand the business task are involved in systems design, it is more likely that the resulting system will fit the task need. Thus, user involvement potentially affects not only user commitment, but also (and in a completely different way) the quality or fit of the resulting system.

Implications for designing diagnostics for IS problems

As the TPC model becomes more solidly supported, and the critical role of TTF in delivering performance impacts is clarified, it suggests that TTF is an excellent focus for developing a diagnostic tool for IS systems and services in a particular company. To be most useful, such a diagnostic must go beyond general constructs (such as user satisfaction, usefulness, or relative advantage) to more detailed constructs (such as data quality, locatability, systems reliability, etc.) that can more specifically identify gaps between systems capabilities and user needs. Based on an understanding of specific gaps, managers may decide to: (1) discontinue or redesign systems or policies, (2) embark on training or selection programs to increase the ability of users, or (3) redesign tasks to take better advantage of IT potential (Goodhue, 1988).

Beyond supporting the importance of the TTF construct, this research has pushed forward the effort to identify and measure distinct components of task-technology fit. Thus, it is an important step toward providing a meaningful diagnostic tool for practice.

Implications for future research

Construct measurement continues to be a key concern in this research domain. Although we have added to the base of knowledge concerning the measurement of TTF components (complementing the work of Goodhue (1993)), there is still ample room for improvement. The TTF measure now focuses on IT support for the user tasks of decision making, changing business processes, and executing routine transactions. Refining the existing TTF dimensions, or expanding to focus on more user tasks, are both potential areas for improvement. In addition, our measures of the characteristics of information systems and services were admittedly crude. It would seem appropriate to explore the development of some standard set of measurable


dimensions for use in comparing the infomiationtechnology base across companies. Simiiarly, itwouid be important to continue work on the is-sue of defining and measuring utilization to ob-tain a better understanding of the role of thisconstruct. It is aiso important to go beyond per-ceived performance impacts, perhaps by con-structing a laboratory environment in which themodel can be tested with objective measures ofperformance.

A second avenue for future research is to expand the scope of testing across more diverse settings. Testing across a wider scope of companies would give a better sense of the relative importance of various components of TTF. Clearly there is a dilemma here, since using more diverse settings would tend to dilute the impact of particular effects, but give greater clarity to effects that are more generally present. An additional opportunity is to explicitly examine feedback in the model. For example, an interesting area for investigation would be the effect of performance impacts on utilization, either directly or indirectly through changes in user ratings of TTF and perceived consequences of use.

Models are ways to structure what we know about reality, to clarify understandings, and to communicate those understandings to others. Once articulated and shared, a model can guide thinking in productive ways, but it can also constrain our thinking into channels consistent with the model, blocking us from seeing part of what is happening in the domain we have modeled. We believe the TPC model is a useful evolution of the models in which IT leads to performance impacts. It should provide a better basis for understanding these critical constructs and for understanding how they link to other related IS research issues.

References

Adams, D.A., Nelson, R.R., and Todd, P.A. "Perceived Usefulness, Ease of Use, and Usage of Information Technology: A Replication," MIS Quarterly (16:2), June 1992, pp. 227-248.

Bagozzi, R.P. "A Field Investigation of Causal Relations Among Cognitions, Affect, Intentions and Behavior," Journal of Marketing Research (19), November 1982, pp. 562-584.

Bailey, J.E. and Pearson, S.W. "Development of a Tool for Measuring and Analyzing Computer User Satisfaction," Management Science (29:5), May 1983, pp. 530-544.

Barclay, D., Higgins, C.A., and Thompson, R.L. "The Partial Least Squares (PLS) Approach to Causal Modeling: Personal Computer Use as an Illustration," Technology Studies: Special Issue on Research Methodology, forthcoming, 1995.

Baroudi, J.J., Olson, M.H., and Ives, B. "An Empirical Study of the Impact of User Involvement on System Usage and Information Satisfaction," Communications of the ACM (29:3), March 1986.

Benbasat, I., Dexter, A.S., and Todd, P. "An Experimental Program Investigating Color-Enhanced and Graphical Information Presentation: An Integration of the Findings," Communications of the ACM (29:11), November 1986, pp. 1094-1105.

Cheney, P.H., Mann, R.I., and Amoroso, D.L. "Organizational Factors Affecting the Success of End-User Computing," Journal of Management Information Systems (3:1), 1986, pp. 65-80.

Cooper, R. and Zmud, R. "Information Technology Implementation Research: A Technological Diffusion Approach," Management Science (36:2), February 1990, pp. 123-139.

Cronan, T.P. and Douglas, D.E. "End-User Training and Computing Effectiveness in Public Agencies," Journal of Management Information Systems (6:4), Spring 1990.

Culnan, M.J. "Environmental Scanning: The Effects of Task Complexity and Source Accessibility on Information Gathering Behavior," Decision Sciences (14:2), April 1983, pp. 194-206.

Daft, R.L. and Macintosh, N.B. "A Tentative Exploration into the Amount and Equivocality of Information Processing in Organizational Work Units," Administrative Science Quarterly (26), 1981, pp. 207-224.

Davis, F.D. "Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology," MIS Quarterly (13:3), September 1989, pp. 319-342.

Davis, F.D., Bagozzi, R.P., and Warshaw, P.R. "User Acceptance of Computer Technology: A Comparison of Two Theoretical Models," Management Science (35:8), August 1989, pp. 982-1003.

DeLone, W.H. and McLean, E.R. "Information Systems Success: The Quest for the Dependent Variable," Information Systems Research (3:1), March 1992, pp. 60-95.

Dickson, G.W., DeSanctis, G., and McBride, D.J. "Understanding the Effectiveness of Computer Graphics for Decision Support: A Cumulative Experimental Approach," Communications of the ACM (29:1), January 1986, pp. 40-47.

Doll, W.J. and Torkzadeh, G. "The Measurement of End-User Computing Satisfaction: Theoretical and Methodological Issues," MIS Quarterly (15:1), March 1991, pp. 5-12.

Fishbein, M. and Ajzen, I. Belief, Attitude, Intention and Behavior: An Introduction to Theory and Research, Addison-Wesley, Reading, MA, 1975.

Floyd, S.W. "A Causal Model of Managerial Electronic Workstation Use," unpublished doctoral dissertation, University of Colorado at Boulder, Boulder, CO, 1986.

Floyd, S.W. "A Micro Level Model of Information Technology Use by Managers," in Studies in Technological Innovation and Human Resources (Vol. 1): Managing Technological Development, U.E. Gattiker (ed.), Walter de Gruyter, Berlin and New York, 1988, pp. 123-142.

Franz, C.R. and Robey, D. "Organizational Context, User Involvement, and the Usefulness of Information Systems," Decision Sciences (17:3), Summer 1986, pp. 329-356.

Fry, L.W. and Slocum, J.W. "Technology, Structure, and Workgroup Effectiveness: A Test of a Contingency Model," Academy of Management Journal (27:2), 1984, pp. 221-246.

Fornell, C. (ed.). A Second Generation of Multivariate Analysis, Vol. 1: Methods, Praeger, New York, 1982.

Fornell, C. "A Second Generation of Multivariate Analysis: Classification of Methods and Implications for Marketing Research," unpublished working paper, Graduate School of Business Administration, The University of Michigan, Ann Arbor, MI, 1984.

Goodhue, D.L. "IS Attitudes: Toward Theoretical and Definitional Clarity," Data Base (19:3/4), Fall/Winter 1988, pp. 6-15.

Goodhue, D.L. "User Evaluations of MIS Success: What Are We Really Measuring?" Proceedings of the Hawaii International Conference on Systems Sciences, Vol. 4, Kauai, Hawaii, January 1992, pp. 303-314.

Goodhue, D.L. "Understanding the Linkage Between User Evaluations of Systems and the Underlying Systems," working paper, MIS Research Center, University of Minnesota, Minneapolis, MN, 1993.

Goodhue, D.L. "Understanding User Evaluations of Information Systems," Management Science, forthcoming.

Hartwick, J. and Barki, H. "Explaining the Role of User Participation in Information System Use," Management Science (40:4), April 1994, pp. 440-465.

Ives, B., Olson, M.H., and Baroudi, J.J. "The Measurement of User Information Satisfaction," Communications of the ACM (26:10), October 1983, pp. 785-793.

Jarvenpaa, S.L. "The Effect of Task Demands and Graphical Format on Information Processing Strategies," Management Science (35:3), March 1989, pp. 285-303.

Lucas, H. "Performance and the Use of an Information System," Management Science (21:8), April 1975, pp. 908-919.

Lucas, H. The Analysis, Design, and Implementation of Information Systems, McGraw-Hill, New York, 1981.

Mathieson, K. "Predicting User Intentions: Comparing the Technology Acceptance Model with the Theory of Planned Behavior," Information Systems Research (2:3), September 1991, pp. 173-191.

Melone, N.P. "A Theoretical Assessment of the User-Satisfaction Construct in Information System Research," Management Science (36:1), January 1990, pp. 76-91.

Moore, G.C. and Benbasat, I. "An Empirical Examination of a Model of the Factors Affecting Utilization of Information Technology by End Users," working paper, University of British Columbia, Vancouver, B.C., 1992.

Neter, J. and Wasserman, W. Applied Linear Statistical Models, Richard D. Irwin, Inc., Homewood, IL, 1974.

Olson, M.H. and Ives, B. "Chargeback Systems and User Involvement in Systems — An Empirical Investigation," MIS Quarterly (6:2), 1982, pp. 47-60.


O'Reilly, C.A. "Variations in Decision Makers' Use of Information Sources: The Impact of Quality and Accessibility of Information," Academy of Management Journal (25:4), 1982, pp. 756-771.

Pedhazur, E. Multiple Regression in Behavioral Research (2nd ed.), Holt, Rinehart and Winston, New York, 1982.

Pentland, B.T. "Use and Productivity in Personal Computers: An Empirical Test," Proceedings of the Tenth International Conference on Information Systems, Boston, MA, December 1989, pp. 211-222.

Perrow, C. "A Framework for the Comparative Analysis of Organizations," American Sociological Review (32:2), 1967, pp. 194-208.

Petingell, K., Marshall, T., and Remington, W. "A Review of the Influence of User Involvement on System Success," Proceedings of the Ninth International Conference on Information Systems, Minneapolis, MN, December 1988, pp. 227-236.

Robey, D. "User Attitudes and Management Information System Use," Academy of Management Journal (22:3), 1979, pp. 527-538.

Ronis, D.L., Yates, J.F., and Kirscht, J.P. "Attitudes, Decisions, and Habits as Determinants of Repeated Behavior," in Attitude Structure and Function, A.R. Pratkanis, S. Breckler, and A.G. Greenwald (eds.), Lawrence Erlbaum Associates, Hillsdale, NJ, 1989.

Straub, D.W. and Trower, J.K. "The Importance of User Involvement in Successful Systems: A Meta-Analytical Reappraisal," MISRC-WP-89-01, Management Information Systems Research Center, University of Minnesota, Minneapolis, MN, 1989.

Swanson, E.B. "Management Information Systems: Appreciation and Involvement," Management Science (21:2), October 1974, pp. 178-188.

Swanson, E.B. "Measuring User Attitudes in MIS Research: A Review," Omega (10:2), 1982, pp. 157-165.

Swanson, E.B. "Information Channel Disposition and Use," Decision Sciences (18:1), Winter 1987, pp. 131-145.

Thompson, J.D. Organizations in Action, McGraw-Hill, New York, 1967.

Thompson, R.L., Higgins, C.A., and Howell, J.M. "Personal Computing: Toward a Conceptual Model of Utilization," MIS Quarterly (15:1), March 1991, pp. 125-143.

Thompson, R.L., Higgins, C.A., and Howell, J.M. "Influence of Experience on Personal Computer Utilization: Testing a Conceptual Model," Journal of Management Information Systems (11:1), 1994, pp. 167-187.

Tornatzky, L.G. and Klein, K.J. "Innovation Characteristics and Innovation Adoption-Implementation: A Meta-Analysis of Findings," IEEE Transactions on Engineering Management (29:1), February 1982, pp. 28-45.

Triandis, H.C. "Values, Attitudes and Interpersonal Behavior," in Nebraska Symposium on Motivation, 1979: Beliefs, Attitudes and Values, H.E. Howe (ed.), University of Nebraska Press, Lincoln, NE, 1980, pp. 195-259.

Trice, A.W. and Treacy, M.E. "Utilization as a Dependent Variable in MIS Research," Data Base (19:3/4), Fall/Winter 1988.

Vessey, I. "Cognitive Fit: A Theory-Based Analysis of the Graphs Vs. Tables Literature," Decision Sciences (22:2), Spring 1991, pp. 219-240.

About the Authors

Dale L. Goodhue is an assistant professor of MIS at the University of Minnesota's Carlson School of Management. He received his Ph.D. in MIS from MIT, and has published in MIS Quarterly, Data Base, Information & Management, and (soon) Management Science. His research interests include measuring the impact of information systems, the impact of task-technology fit on performance, and the management of data and other IS infrastructures/resources.

Ronald L. Thompson is an associate professor with the School of Business Administration, University of Vermont. He holds a Ph.D. from the University of Western Ontario (Canada), and gained experience in ranching and banking prior to entering academe. His articles have appeared in journals such as MIS Quarterly, Journal of Management Information Systems, Information & Management, and the Journal of Creative Behaviour. Ron's current research interests focus on factors influencing the adoption and use of information technology by individuals, as well as the relation between IT use and individual performance. His book, Information Technology and Management (co-authored with W. Cats-Baril), is scheduled for release by Irwin Publishing in 1996.

Appendix

Construct Measures and Descriptive Statistics

In each company, the basic research questionnaire was customized by inserting precise acronyms and terms so that names of systems and departments would be readily identifiable by the respondents.

PART A. TASK-TECHNOLOGY FIT MEASURES
(8 final factors of TTF, shown with the 21 original dimensions of TTF and their questions)

Quality
CURRENCY: (Data that I use or would like to use is current enough to meet my needs.)
CURR1 — I can't get data that is current enough to meet my business needs.
CURR2 — The data is up to date enough for my purposes.
RIGHT DATA: (Maintaining the necessary fields or elements of data.)
RDAT1 — The data maintained by the corporation or division is pretty much what I need to carry out my tasks.
RDAT2 — The computer systems available to me are missing critical data that would be very useful to me in my job.
RIGHT LEVEL OF DETAIL: (Maintaining the data at the right level or levels of detail.)
RLEV1 — The company maintains data at an appropriate level of detail for my group's tasks.
RLEV2 — Sufficiently detailed data is maintained by the corporation.

Locatability
LOCATABILITY: (Ease of determining what data is available and where.)
LOCT1 — It is easy to find out what data the corporation maintains on a given subject.
LOCT3 — It is easy to locate corporate or divisional data on a particular issue, even if I haven't used that data before.
MEANING: (Ease of determining what a data element on a report or file means, or what is excluded or included in calculating it.)
MEAN1 — The exact definition of data fields relating to my tasks is easy to find out.
MEAN2 — On the reports or systems I deal with, the exact meaning of the data elements is either obvious, or easy to find out.

Authorization
AUTHORIZATION: (Obtaining authorization to access data necessary to do my job.)
AUTH1 — Data that would be useful to me is unavailable because I don't have the right authorization.
AUTH2 — Getting authorization to access data that would be useful in my job is time consuming and difficult.

Compatibility
COMPATIBILITY: (Data from different sources can be consolidated or compared without inconsistencies.)
COMP1 — There are times when I find that supposedly equivalent data from two different sources is inconsistent.
COMP2 — Sometimes it is difficult for me to compare or consolidate data from two different sources because the data is defined differently.


COMP3 — When it's necessary to compare or consolidate data from different sources, I find that there may be unexpected or difficult inconsistencies.

Production Timeliness
TIMELINESS: (IS meets pre-defined production turnaround schedules.)
PROD1 — IS, to my knowledge, meets its production schedules such as report delivery and running scheduled jobs.
PROD2 — Regular IS activities (such as printed report delivery or running scheduled jobs) are completed on time.

Systems Reliability
SYSTEMS RELIABILITY: (Dependability and consistency of access and uptime of systems.)
RELY1 — I can count on the system to be "up" and available when I need it.
RELY2 — The computer systems I use are subject to unexpected or inconvenient down times, which makes it harder to do my work.
RELY3 — The computer systems I use are subject to frequent problems and crashes.

Ease of Use / Training
EASE OF USE OF HARDWARE & SOFTWARE: (Ease of doing what I want to do using the system hardware and software for submitting, accessing, and analyzing data.)
EASE1 — It is easy to learn how to use the computer systems I need.
EASE2 — The computer systems I use are convenient and easy to use.
TRAINING: (Can I get the kind of quality computer-related training when I need it?)
TRNG1 — There is not enough training for me or my staff on how to find, understand, access or use the company computer systems.
TRNG2 — I am getting the training I need to be able to use company computer systems, languages, procedures and data effectively.

Relationship with Users

IS UNDERSTANDING OF BUSINESS: (How well does IS understand my unit's business mission and its relation to corporate objectives?)
UNBS1 — The IS people we deal with understand the day-to-day objectives of my work group and its mission within our company.
UNBS2 — My work group feels that IS personnel can communicate with us in familiar business terms that are consistent.
IS INTEREST AND DEDICATION: (To supporting customer business needs.)
INDN1 — IS takes my business group's business problems seriously.
INDN2 — IS takes a real interest in helping me solve my business problems.
RESPONSIVENESS: (Turnaround time for a request submitted for IS service.)
RESP1 — It often takes too long for IS to communicate with me on my requests.
RESP2 — I generally know what happens to my request for IS services or assistance or whether it is being acted upon.
RESP3 — When I make a request for service or assistance, IS normally responds to my request in a timely manner.
CONSULTING: (Availability and quality of technical and business planning assistance for systems.)
CONS1 — Based on my previous experience I would use IS technical and business planning consulting services in the future if I had a need.
CONS2 — I am satisfied with the level of technical and business planning consulting expertise I receive from IS.
IS PERFORMANCE: (How well does IS keep its agreements?)
PERF2 — IS delivers agreed-upon solutions to support my business needs.

PART B. TASK/JOB CHARACTERISTICS MEASURES
TASK EQUIVOCALITY
ADHC1 — I frequently deal with ill-defined business problems.
ADHC2 — I frequently deal with ad-hoc, non-routine business problems.


ADHC3 — Frequently the business problems I work on involve answering questions that have never been asked in quite that form before.

TASK INTERDEPENDENCE
INTR1 — The business problems I deal with frequently involve more than one business function.
INTR2 — The problems I deal with frequently involve more than one business function.

PART C. INDIVIDUAL PERFORMANCE IMPACT MEASURES
PERFORMANCE IMPACT OF COMPUTER SYSTEMS:
IMPT1 — The company computer environment has a large, positive impact on my effectiveness and productivity in my job.
IMPT3 — IS computer systems and services are an important and valuable aid to me in the performance of my job.

PART D. DIMENSIONS AND QUESTIONS DROPPED AS NOT SUCCESSFULLY MEASURED
CONFUSION. (Difficulty in understanding which systems or files to use in a given situation.) 2 questions.
ACCESSIBILITY. (Access to desired data.) 3 questions.
ACCURACY. (Correctness of the data.) 2 questions.
IS POLICIES, STANDARDS & PROCEDURES. (Impact of policies, standards & procedures on job.) 2 questions.
ASSISTANCE. (Ease of getting help with problems related to computer systems and data.) 3 questions.

Individual Questions Dropped: Locatability (1), IS performance meets goals (1), Performance impact (1).
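The factor scores summarized in Part E are presumably computed by averaging each respondent's item responses on the 7-point scale, with negatively worded items (such as CURR1, "I can't get data that is current enough...") reverse-scored first. The following is a minimal sketch of that scoring convention, not the authors' actual procedure; the set of reverse-scored items shown is an illustrative assumption based on item wording.

```python
# Hypothetical sketch of construct scoring for 7-point Likert items:
# negatively worded items are reverse-scored (8 - response) before
# averaging a factor's items into one score per respondent.
# The REVERSED set is an assumption inferred from item wording,
# not taken from the paper.

REVERSED = {"CURR1", "RDAT2", "AUTH1", "AUTH2", "RELY2", "RELY3", "TRNG1", "RESP1"}

def factor_score(responses: dict[str, int]) -> float:
    """Average one respondent's item responses (1-7) into a factor score."""
    adjusted = [
        8 - v if item in REVERSED else v  # reverse-score negative items
        for item, v in responses.items()
    ]
    return sum(adjusted) / len(adjusted)

# Example: the two CURRENCY items for one respondent.
# CURR1 is reverse-scored: 8 - 2 = 6, so the factor score is (6 + 6) / 2.
score = factor_score({"CURR1": 2, "CURR2": 6})
print(score)  # -> 6.0
```

Factor means computed this way over all respondents would correspond to the "Mean" column of the descriptive statistics in Part E.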

PART E. DESCRIPTIVE STATISTICS

Variable                  N    Mean      Std Dev   Minimum   Maximum
Relationship with Users   598  4.446977  0.956491  1.000000  7.000000
Quality                   605  4.629835  1.075390  1.000000  7.000000
Production Timeliness     561  4.851159  1.086707  1.000000  7.000000
Compatibility             591  3.681331  1.117980  1.000000  7.000000
Locatability              600  3.702361  1.069070  1.000000  7.000000
Ease of Use/Training      609  4.118227  1.184834  1.000000  7.000000
Systems Reliability       604  4.311534  1.296419  1.000000  7.000000
Authorization             588  4.156463  1.316974  1.000000  7.000000
Non-Routine Tasks         581  4.202238  1.313822  1.000000  7.000000
Task Interdependence      578  4.664360  1.341179  1.000000  7.000000
Performance Impact        608  5.355263  1.203895  1.000000  7.000000
Total Dependence          559  4.044723  2.176727  0         10
