


Quality End User-Developed Applications: Some Essential Ingredients

Abstract

This article examines the dimension of quality in end user-developed applications. Quality is defined as the degree to which an application of "high grade" accomplishes or attains its goal from the perspective of the user. We propose a definition for "quality" as a combination of end user information satisfaction and application utilization. We then discuss three measurement instruments that were developed to capture the dimensions of quality and assess their psychometric properties.

ACM Categories: D.2.9, H.4.0, I.2.1, J.1, K.6.2
Keywords: End user computing, quality, application effectiveness

by Donald L. Amoroso, University of Colorado, Colorado Springs
and Paul H. Cheney, University of South Florida

THE ISSUE

We are well into the second decade of end user computing (EUC) activity within most companies. The prediction of Robert Benjamin (1982) that by 1990 EUC will absorb as much as 90% of the total computing resources in organizations appears to be coming true. This growth has been fostered by the proliferation of microcomputers, the availability of powerful, user-friendly software, and by the move toward distributed information processing. EUC continues to be an important issue for managers of tomorrow's organizations. The increase in EUC literature provides evidence of this trend.

The purpose of this study is to examine the quality dimension of end user-developed applications. We will first define the quality construct and suggest one approach for how to measure it. We will draw on some recent research to identify factors that contribute to the quality of end user-developed applications. Our approach posits that the quality dimension is very closely related to two other factors: the success and the effectiveness of a specific application.

The Random House Dictionary (1972) defines four important terms for this research as follows:

Success—The favorable or prosperous termination of attempts or endeavors.
Effectiveness—Adequate to accomplish a purpose; producing the intended or expected result.
Quality—High grade; superiority; excellence. An accomplishment or attainment.
Quality Control—A system for verifying and maintaining a desired level of quality in a product or process by careful planning, use of proper equipment, continued inspection, and corrective action as required.

End user computing occurs when a computer user relies on information technology to develop specialized applications for their own personal use, departmental use, or, in some cases, organization-wide use. End users are typically non-data processing professionals.

The benefits of EUC are well documented. These benefits accrue not only to the end users themselves; they also accrue to the information systems group and to the organization. Table 1 depicts several of the benefits of EUC that have been cited recently in the literature. Besides reducing development lead time, the major advantages of EUC are that the problems associated with eliciting information requirements are shifted to the insiders, while ownership is immediately transferred to the users. Having users develop their own applications eliminates the problems associated with ineffective communication between the systems analyst and the end users and results in systems that are actually used.

To End Users
• Increased decision making effectiveness
• Improved user computer literacy
• Increased satisfaction with end user-developed applications
• Faster response to information requests
• Improved relationships with the IS staff

To IS Staff and IS Department
• A reduction in the backlog of IS application development projects
• A decrease in the proportion of IS resources spent on application maintenance and programming
• Improved programmer job satisfaction
• Better use of limited resources
• Improved user relations

To Management
• Fewer user/IS conflicts
• More satisfied end users and IS staff
• Direct control over departmental information, applications, and the environment in general
• Increased end user productivity

Table 1. The Benefits of EUC in Organizations

Eliminating the systems analyst's role as the central figure in the analysis and design process, however, carries with it a number of dangers. Davis (1982) and Amoroso, McFadden and White (1990) discussed several dangers inherent in the development of end user applications. First, the elimination of the separate user and analyst roles may result in the neglect of training, documentation, or maintenance of user-developed applications. Second, the user may not have the ability to correctly identify complete information requirements. Moreover, Alavi and Weiss (1986) reported that end users may inadvertently apply the wrong analysis technique to a situation or, in some cases, even attempt to solve the wrong problem.

DATA BASE, Winter 1992


Many researchers have noted frequent errors in end user-developed applications, specifically: 1) mistakes in logic, 2) unreliable output, 3) unauditable applications, 4) inability to change or modify applications, and 5) an overall lack of comprehensibility. One of the most serious risks is embodied in the nature of the developers: EUC developers generally lack computer training, especially in the area of systems development techniques. Rockart and Flannery (1983) and Quillard, Rockart, Wilde, Vernon, and Mock (1983) independently reported that 60% of end user-developers are nontechnical personnel, using the computer primarily as a tool to solve a problem or perform a task. Since those reports, the penetration of end user computing into the organization has increased substantially, and so has the potential for poor quality applications.

Despite these risks, however, user-developed applications have become the norm, accounting for 50-80% of the information technology budgets in many firms (Amoroso, 1990; Brancheau and Wetherbe, 1990). Clearly, end users believe that the benefits exceed the risks; otherwise they would not engage in application development activities. The objective of practitioners and researchers alike is to design and develop procedures that retain the benefits and reduce the risks of EUC. In other words, quality control should be of major importance in the development of end user applications.

We contend that quality control can be addressed by developing and using instruments which are reliable and tested for validity. Our premise is that there may be user and organizational variables that profoundly impact the quality of end user-developed applications. Those variables, when properly controlled, may lead to increased benefits and lower risks.

There is a scarcity of literature that empirically examines the quality issue of end user applications within the corporate environment. Goodhue (1985) and Rivard and Huff (1985) stated that effectiveness and success are essentially the same construct when applied to end user computing. In a previous article (1991), the authors proposed and tested a causal model which includes end user information satisfaction and application utilization as surrogates of application effectiveness. These have traditionally been used independently as measures of information systems success and, in some cases, effectiveness.

This article explores the quality construct as a combination of information satisfaction and application usage. In this context we define quality as the degree to which an application of high grade accomplishes or attains its goal from the perspective of the end user. We believe that end user information satisfaction addresses the high grade issue, while application utilization deals with the attainment of goals. Research has shown that superiority and high grade issues, with respect to end user-developed applications, are mainly perceptual in nature (Doll and Torkzadeh, 1991; Glorfield, Luster and Cronan, 1991). The research stream in end user information satisfaction is useful in understanding this aspect of application quality. Measuring the usage of an application provides evidence of its ability to accomplish a goal.

This article will discuss the EUC environment in order to gain a perspective on those who will evaluate the quality of end user-developed applications. We will then consider previous research in the development of information satisfaction and application utilization instruments.

THE NATURE OF END USERS

Early in the EUC movement, Rockart and Flannery (1983) took a very broad view of end user computing when they introduced six distinct classes. These end users differed significantly in terms of computer skills, method of computer use, application focus, and the amount of support needed and obtained.

• Nonprogramming end users access computerized data through a limited, menu-driven application program, usually provided by others.
• Command-level end users are able to specify, access, and manipulate data in order to generate unique reports.
• End user programmers utilize both command and procedural languages directly for their own personal information needs.
• Functional support personnel support other end users and themselves in the development of applications.
• End user computing support personnel and DP programmers, fluent in end user languages, aid other end users in the development process.

Three additional studies utilized the Rockart and Flannery taxonomy as a means of classifying respondents. Quillard, et al. (1983) added levels of programming and technical understanding to their list. In a Brancheau and Wetherbe (1985) field study, end users were self-classified based upon the above user descriptions. Experience was regarded as an important user characteristic. Sumner and Klepper (1987) investigated user applications with end users in the command level, end user programmer, and functional support categories. The primary user characteristics they examined were the degree of user involvement in application development, the training and development of end users, and the nature of applications developed.

In five studies conducted in the 1980s, less than 10% of the end users surveyed fell into the nonprogramming end user category. The Rockart and Flannery and Sumner and Klepper studies indicated that a large group of end users were found in the functional support category. Quillard, et al., reported only 20% in the functional support group, while 71% were in the command level and end user programmer categories.

Both Quillard, et al. (1983) and Amoroso and Cheney (1987, 1991) suggest that the growth of EUC would come from the command level end user and end user programmer categories. They found that 71% of the end users fell into these two categories. Davis (1990) identified the autonomous user (mapping to the command level end user and the end user programmer categories) as the fastest growing group of end users in the 1990s. Perhaps the Sumner and Klepper data were skewed toward the functional support group, as they were investigating information systems strategy in their research. One cannot, however, deny the growth that will take place in the functional support group over the next decade.

It appears from these studies that the nonprogramming end user does not reflect a high growth category of end user. End user categories strongly reflect the development dimension. Each of the studies reported a random selection of end users in the organizations chosen, so one might argue that the studies represented in Table 2 show evidence of end users developing applications, rather than simply using predeveloped applications.

Table 2. Composite of End User Studies

                          Rockart &   Quillard   Brancheau &   Sumner &   Amoroso &
% of End Users            Flannery    et al.     Wetherbe      Klepper    Cheney
Nonprogramming end user       9%         1%          4%           0%         3%
Command level end user       22%        35%         26%          26%        41%
End user programmers         30%        36%         41%          13%        34%
Functional support           53%        20%         29%          61%        16%
EUC support                   7%         8%          0%           0%         4%
DP programmers               15%         0%          0%           0%         0%
Reported Sample Size         140        83          53           31        260

THE QUALITY CONSTRUCTS

Using information satisfaction, and more recently end user computing satisfaction, as a surrogate measure of success has been a continuing subject of discussion and disagreement in the MIS literature. To a lesser extent, system usage, or in some cases information utilization, as a measure of the success of a given application system has also been hotly debated. One additional issue concerns the relationship between satisfaction and usage. One might logically argue that increased satisfaction with a system will lead to increased usage of that system. An equally convincing argument, however, is that the more a person uses a system, the more they may like it and become comfortable and satisfied with it. We must conclude that these constructs should not be considered independently.
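If the two subconstructs are indeed interrelated, their association can be checked directly once both are measured. The sketch below uses entirely hypothetical per-application scores; it computes the correlation between satisfaction and usage and a simple composite quality index as the mean of the two standardized subconstructs. The article itself does not prescribe this particular combination rule; it is one plausible operationalization.

```python
from statistics import mean, stdev

def standardize(xs):
    # Convert raw scores to z-scores (sample standard deviation).
    m, s = mean(xs), stdev(xs)
    return [(x - m) / s for x in xs]

def pearson(xs, ys):
    # Pearson correlation from sample z-scores.
    zx, zy = standardize(xs), standardize(ys)
    return sum(a * b for a, b in zip(zx, zy)) / (len(xs) - 1)

# Hypothetical scores: 1-6 satisfaction scale, hours of use per week.
satisfaction = [5.2, 3.1, 4.4, 2.8, 5.8, 4.0]
usage        = [12,  4,   9,   3,   14,  8]

r = pearson(satisfaction, usage)
# Composite "quality" index: mean of the two standardized subconstructs.
quality = [(a + b) / 2 for a, b in zip(standardize(satisfaction), standardize(usage))]
print(round(r, 2), [round(q, 2) for q in quality])
```

With interrelated constructs like these, a high correlation is expected, which is precisely why the article argues they should not be treated as independent measures.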

END USER INFORMATION SATISFACTION

Bailey and Pearson (1983) defined user information satisfaction as a multidimensional attitude of the user toward different aspects of an information system. Ives, Olson and Baroudi (1983) and Iivari (1987) described user information satisfaction as the perceived effectiveness of an information system. Treacy (1985) concluded that: "Starting with the factors discovered by Ives, Olson and Baroudi—a more precise, unambiguous, and complete causal model of UIS should be developed. This model would result in a diagnostic model of UIS that could have important implications for the management of end user computing."

Doll and Torkzadeh (1988, 1991) argue that end user computing satisfaction (EUCS) is an important theoretical construct because of its potential for helping us discover both forward and backward links in a causal chain. Figure 1 depicts their "system to value" chain. In their view, end user computing satisfaction is potentially both a dependent variable (when the focus of the research interest is upstream activities or factors that cause end user computing satisfaction) and an independent variable (when the domain of one's research interest is downstream behaviors affected by end user satisfaction). Most MIS research thus far has been upstream in nature, with user information satisfaction used as a measure of systems success to evaluate various design and implementation activities.

Glorfield, et al. (1991) compared the Ives, Olson and Baroudi (IOB) and Doll/Torkzadeh instruments of user information satisfaction using both personal computing and mainframe applications. They found that the two instruments measured different aspects of user information satisfaction with the exception of IOB's information product component. This finding is consistent with our contention that the user information satisfaction construct, particularly the information product component, focuses on the superiority and high grade component of quality. Therefore, we converged on the information product using a modified Ives, Olson and Baroudi instrument for the end user computing environment.

APPLICATION UTILIZATION

Utilization has been studied by a number of researchers in the past two decades. Why do managers seek so diligently for a good measure of utilization? One reason involves the need to justify expenditures on information technologies which end users continue to demand. Another reason is the rapid introduction of emerging technologies in corporations every year. Also, managers have found a lack of consistency in present utilization definitions and measures. Unlike user information satisfaction, standard utilization measures still do not exist today.

Srinivasan (1985) defined system utilization as a behavioral measure and states, "If the user exhibited increased evidence of system use in situations where use was not mandatory, then he must find the system useful." Ives and Olson (1984) described system utilization as a useful measure of user acceptance, validating the belief that usage by end users describes the application as attaining its development goal.

[Figure 1 depicts the chain running from upstream to downstream: Causal Factors, Beliefs, Attitudes (including EUCS), Performance Related Behaviors (e.g., use), and Social and Economic Impact. Factors to the left of EUCS are upstream; behaviors to the right are downstream.]

Figure 1. System to Value Chain



Several problems exist with the previous studies. First, single-item measures of system utilization were employed where a multi-dimensional construct was being assessed. Researchers often failed to report and/or account for the variance in the type of application utilized. Different definitions of utilization were often employed, depending upon the process under study. Another problem was the operationalization of the variables in a particular category. Many early studies did not report instrument validation or reliability scores. Yet another problem was deciding which aspect of usage to measure. Researchers often forgot to include task characteristics related to type of use in their variable set.

The importance of examining intended and actual utilization was suggested by Cheney (1980) in order to make comparisons. The "intended" dimension captures an end user's attitude toward application usage rather than focusing solely on behavioral characteristics. Intuitively, Cheney postulated, one would expect to find that intended measures score higher than actual measures on a specific characteristic of use. From a manager's perspective, combining intended with actual utilization yields empirical evidence for justifying the purchase of emerging information technologies, rather than relying solely on end user intended usage requests. Actual utilization patterns, cross-referenced with intended utilization patterns, expose areas of need and recommend areas of organizational support.
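That cross-referencing step can be sketched simply. The category names and scores below are illustrative, not the actual items of the Bostrom/Cheney instrument; the point is only to surface categories where intended use outruns actual use, which the article identifies as areas needing organizational support.

```python
# Hypothetical 5-point scores per utilization category (names are illustrative,
# not the instrument's actual item wording).
intended = {"planning": 5, "control": 4, "ad hoc reporting": 5, "record keeping": 2}
actual   = {"planning": 3, "control": 4, "ad hoc reporting": 2, "record keeping": 2}

# Categories where actual use lags intended use expose areas of need.
gaps = {k: intended[k] - actual[k] for k in intended if intended[k] > actual[k]}
for category, gap in sorted(gaps.items(), key=lambda kv: -kv[1]):
    print(f"{category}: intended exceeds actual by {gap}")
```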

We believe that there may be some correlation between the two quality subconstructs of end user information satisfaction and application utilization. Melone (1990) commented on the fact that performance-related operationalizations would enhance the value of the system usage construct. These are measures that consider the integrated context in which work is actually accomplished and the information is actively used. We agree that additional performance-related operationalizations are needed, but we also recognize that such measures will probably be application-specific, which will make generalization difficult, if not impossible.

ONE PREVIOUS STUDY ON EFFECTIVENESS

A recent study on end user application effectiveness by Amoroso and Cheney (1991) helps to explain the factors that impact end user information satisfaction and application utilization. The hypotheses that were tested in this research are illustrated in the model in Figure 2. We will describe each of the hypotheses with their justification for inclusion in the model below:

1. The larger the perceived application backlog, the greater should be the level of end user information satisfaction. Cheney, Mann and Amoroso (1986) proposed that the probability of end user application success, as measured by user information satisfaction, should be enhanced when the application development backlog is perceived to be large. Martin (1982) stated that users will take development into their own hands when faced with an impossible backlog.

[Figure 2 diagrams the hypothesized model: numbered constructs, including Perceived Helpfulness of EUC Policy, Perceived Organizational Support for EUC Application Development, Perceived Quality of EUC Application Development Tools, and Motivation to Develop Applications, with paths to End User Information Satisfaction and Application Utilization.]

Figure 2. Model of End User Application Effectiveness

2. The degree of previous computer experience and training should influence end user satisfaction and application usage. Rivard and Huff (1985) reported that a user's computer background was a significant variable in explaining why some viewed a tool as easy to use while others perceived the same tool as difficult to use. Kasper and Cerveny (1985) concluded that users with more computer experience developed a significantly greater number of applications. Yaverbaum (1988) reported an increase in the internal motivation to use computers as the number of years of computing experience grew. Fuerst and Cheney (1982) found a strong relationship between system usage and the experience of system users.

Hackathorn (1987) found training and education to be strongly associated with the general success of the end user computing environment. Guimaraes and Gupta (1987) tested the impact of training on a variety of variables related to personal computing and support services, finding several positive relationships, including motivation. Fuerst and Cheney (1982) also reported a significant correlation between user training and utilization.

3. The perceived helpfulness of EUC policies should impact the level of end user information satisfaction. Policies have not kept pace with the rapid growth of the EUC field. Gerrity and Rockart (1986) suggest a set of integrated policies, standards, and guidelines to ensure the highest quality technical environment.

4. Perceived organizational support of EUC application development, in the form of hardware, software, data processes, and people, has been cited as a strategy that will increase the likelihood of EUC success (Cheney, et al., 1986). Jobber and Watts (1986) and Lucas (1975) concluded that the more positive the perception of organizational support, the greater the degree of system utilization.

5. The perceived quality of end user application development tools should positively impact both the satisfaction and usage of end user-developed applications. Many authors have stressed the importance of the quality of user friendly tools for successful EUC environments. Rivard and Huff (1985) paid particular attention to the correlation between the quality of end-user tools and a user's motivation to develop new applications. A relationship exists between the quality of EUC tools and end user satisfaction. We feel that a similar relationship exists with application utilization.

6. The more positive a user's attitude towards end user development, the greater the levels of both end user information satisfaction and application utilization. Robey (1979) concluded that as user attitudes toward a new system improve, the likelihood of system success increases. Goodhue (1988) reported that users who held realistic expectations or attitudes toward newly implemented systems were more satisfied with the system and used it more than users whose pre-implementation expectations were unrealistic. Rivard and Huff (1985) found a correlation between user attitudes and overall user satisfaction. Baroudi, Olson and Ives (1986) discovered that user attitudes toward an information system will influence behavior with respect to use of that system and its outputs.

7. The level of a user's motivation to develop applications should impact both satisfaction and utilization. Motivation is defined as a person's internal force to behave in a certain way. Zmud (1979) proposed a model for information systems success by examining the individual differences of system users. Trice and Treacy (1986) presented a model for structuring utilization research and suggested the inclusion of individual differences.

The data in that study were analyzed using Partial Least Squares (PLS), a multivariate path analysis statistical technique developed by Wold (1985). PLS is known as a second generation causal modeling technique. Each step of the PLS iterative procedure involves the minimization of some residual variation with respect to a subset of the parameters. The findings from this research indicated that:
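The inner (structural) part of such a model estimates standardized path coefficients, each estimation step minimizing a residual variance. The sketch below is not the authors' PLS procedure; it only illustrates the idea on simulated construct scores, estimating two paths into utilization by ordinary least squares. All construct names and numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Hypothetical standardized construct scores (in a real PLS analysis these
# would be latent-variable scores estimated iteratively, not raw data).
motivation = rng.standard_normal(n)
tool_quality = rng.standard_normal(n)
# Simulate utilization driven by the two antecedents plus noise.
utilization = 0.6 * motivation + 0.2 * tool_quality + 0.4 * rng.standard_normal(n)

def z(x):
    # Standardize so the regression weights are path coefficients.
    return (x - x.mean()) / x.std()

X = np.column_stack([z(motivation), z(tool_quality)])
y = z(utilization)
# One residual-minimization step: least squares for the two inner-model paths.
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coefs, 2))  # path coefficients for motivation and tool quality
```

Because the simulated data give motivation the stronger true effect, its estimated path coefficient comes out larger, mirroring the pattern in finding 1 below.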

1. Motivation to develop applications was positively related to application utilization. It had the strongest path coefficient in the model (coef. = .59).

2. Perceived quality of EUC development tools was positively related to end user information satisfaction (coef. = .18). It was not as strongly related to usage (coef. = .15).

3. Perceived organizational support was indirectly related to improved end user information satisfaction (coef. = .53). It also directly impacted satisfaction (coef. = .20).

4. User attitude towards end user development was also related to end user satisfaction (coef. = .36).

5. Past computer experience and training was only indirectly related to satisfaction (coef. = .17) and usage (coef. = .18).

All of these findings were tested in an effort to validate this model as it relates to end user application effectiveness. We felt that additional research was needed to test the psychometric properties of the quality instruments and to investigate utilization patterns of end users.

THE CURRENT STUDY ON QUALITY

Forty organizations agreed to participate in this follow-up research, out of a total of 74 randomly chosen Fortune 500 firms. The sample represented a wide variety of firms with average sales of $6.6 billion and an average number of employees of approximately 45,000. End users who developed at least one application in the past 12 months were identified by a corporate contact individual. Random follow-up telephone calls to corporate contacts were made within two weeks of delivering the instrument to ensure that the respondents had no problems understanding or interpreting the measurement instruments. In addition to completing multiple-item scales for end user information satisfaction and application utilization, respondents answered questions regarding corporate demographics and the seven independent variables in the causal model (Figure 2).

The unit of analysis for this study is an application developed by the end user. Table 3 indicates the functional areas represented by the end users in our study. Over 50% of the end user developers in this study developed at least one application per month. Most end users had a high level of enthusiasm for developing future applications.

DATA BASE, Winter '92


Table 3 . Functional Areas Represented by End Users

Functional Area                     # of End Users   Proportion of End Users
Finance/Investment                       101                 20%
Accounting/Audit                          86                 17%
Marketing                                 44                  9%
Customer Service                          47                  9%
Personnel                                 47                  9%
General Administration/Planning           45                  9%
Purchasing                                31                  6%
Other                                    105                 22%
TOTAL                                    506

DEVELOPING THE QUALITY INSTRUMENTS

The end user information satisfaction instrument was derived primarily from the Ives, Olson and Baroudi (1983) work. The revised IOB instrument focuses primarily on the information product component and consists of ten scales (see Table 4). The Bailey and Pearson (1983) instrument was consulted and used for more precise scale definitions. For example, the volume of output from an application would be described as follows:

Volume of output: the amount of information conveyed to you from the application, expressed not only by the number of reports or outputs but also by the volume of the output contents.

Concise    1 : 2 : 3 : 4 : 5 : 6   Redundant
Necessary  1 : 2 : 3 : 4 : 5 : 6   Unnecessary

Each of the items on the end user information satisfaction questionnaire utilized a six-point Likert-type scale. Two adjective sets were used for each of the items. One global question was asked to correlate individual scales.

Overall, how would you evaluate your satisfaction or dissatisfaction with the application that you have developed?

Very dissatisfied   1 : 2 : 3 : 4 : 5 : 6   Very satisfied

The utilization instrument was derived primarily from the work of Bostrom (1978) and Cheney (1980). Bostrom developed and Cheney refined a set of categories for both intended and actual utilization which took into account organizational task characteristics (see Table 4). To capture the multidimensional nature of intended and actual utilization, two global measures of intended and actual utilization were added. One global question using a five-point Likert-type scale was worded as follows:

Overall, how would you rate your actual use of the specific application you identified earlier?

Rarely   1 : 2 : 3 : 4 : 5   Often

Overall, how would you rate your intended use of the specific application you identified earlier?

Rarely   1 : 2 : 3 : 4 : 5   Often

Face validity of the instruments was tested with a pilot study of 40 middle-level managers within one firm. Each of the respondents was given the utilization questionnaire, which was followed by structured interviews to assess the meaning of individual questions. As a result of the pretest, the phrasing of several questions was changed and the sequencing, length, and format of the instrument modified. For example, several users in the pilot study expressed the need for definitions to be supplied in the opening pages of the instrument. Also, several categories of Likert-type questions, such as computer experience, required the addition of a neutral response.

Table 4. Instrument Operationalizations


End User Information Satisfaction
1. Accuracy of the application's output
2. Timeliness of the application's output
3. Precision of the application's output
4. Reliability of the application's output
5. Currency of the application's output
6. Completeness of the application's output
7. Volume of output
8. Relevancy of the application's information
9. Flexibility of the application
10. User confidence in the application

Utilization Task Characteristics
1. Making decisions
2. Looking for trends
3. Planning
4. Taking action
5. Finding problems
6. Historical reference
7. Budgeting
8. Staying up-to-date on activities
9. Controlling and guiding activities
10. Aiding in reporting to supervisors
11. Aiding in increased productivity
12. Aiding in the cutting of costs


ASSESSING THE PSYCHOMETRIC PROPERTIES OF THE INSTRUMENTS

The psychometric properties of the instruments were tested for reliability and construct validity. The reliability of a measure refers to its stability over a variety of conditions. In the statistical context, reliability is the accuracy or precision of a measuring instrument. It can be assessed across a set of measures which purport to measure the same theoretical variable. The most widely used summary statistic for estimating internal consistency is the Cronbach alpha coefficient, based upon item intercorrelations.
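As a concrete illustration of the internal-consistency statistic used in this study, the following is a minimal sketch of Cronbach's alpha for a k-item scale. The response data are hypothetical (the study's own instruments had many more items and 506 respondents); only the formula itself is taken as given.

```python
# Cronbach's alpha: internal consistency of a multi-item scale.
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
# Hypothetical 6-point Likert responses; rows = respondents, cols = items.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(rows):
    k = len(rows[0])                      # number of items in the scale
    items = list(zip(*rows))              # transpose to per-item columns
    totals = [sum(r) for r in rows]       # each respondent's total score
    item_var = sum(variance(col) for col in items)
    return (k / (k - 1)) * (1 - item_var / variance(totals))

responses = [
    [5, 5, 6, 5], [4, 4, 5, 4], [6, 6, 6, 5],
    [3, 4, 3, 4], [5, 4, 5, 5], [2, 3, 2, 3],
]
print(round(cronbach_alpha(responses), 2))   # 0.95 for this sample
```

Values in the .79-.89 range reported in Table 5 are conventionally read as adequate internal consistency for research instruments.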

All unique correlations for scales in each of the three instruments (intended usage, actual usage, and end user information satisfaction) were calculated and compared. Table 5 provides the results of our analysis for the three questionnaires. Each of the end user information satisfaction variables consists of two items derived from the Bailey and Pearson instrument. The three instruments appear to provide a reasonable level of internally consistent operationalizations of the theoretical variables, with Cronbach alpha values ranging from .79 to .89.

Item-to-total correlation values were reported in order to explain the contribution of a specific item to the construct. These correlations were computed by deleting each item one at a time. The lowest item-to-total correlation was reported for the item capturing the timeliness of the application's output. Also, this item did not correlate with any of the other items in the instrument. Therefore, it was dropped from the composite measure.
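The one-at-a-time deletion procedure described above, often called the corrected item-to-total correlation, can be sketched as follows. The responses are hypothetical; the last item is deliberately constructed to misbehave, playing the role the timeliness item played in the study.

```python
# Corrected item-to-total correlation: each item is correlated with the
# total of the *remaining* items (the item itself deleted first).
# Hypothetical responses, not the study's data.
from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def item_total_correlations(rows):
    k = len(rows[0])
    out = []
    for i in range(k):
        item = [r[i] for r in rows]
        rest = [sum(r) - r[i] for r in rows]   # total with item i deleted
        out.append(pearson(item, rest))
    return out

responses = [
    [5, 5, 6, 2], [4, 4, 5, 5], [6, 6, 6, 3],
    [3, 4, 3, 6], [5, 4, 5, 2], [2, 3, 2, 5],
]
for i, r in enumerate(item_total_correlations(responses), 1):
    print(f"item {i}: {r:+.2f}")
```

An item whose corrected correlation is low (or negative, as for item 4 here) is not measuring the same construct as the rest of the scale and is a candidate for removal, which is the logic behind dropping the timeliness item.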

We then assessed the instruments' construct validity. Validity of some instruments depends upon the adequacy with which a specified domain of content can be sampled. Kerlinger (1973) recommends the use of the multitrait-multimethod (MTMM) approach for bringing evidence to bear on convergent and discriminant validities of a composite scale. Ghiselli, Campbell, and Zedeck (1981) define the MTMM technique as a matrix for organizing correlational data such that construct validity can be assessed from the examination of convergent and discriminant validities. The MTMM approach, in theory, requires two or more variables and at least two dissimilar measurement methods. In practice, similar rather than dissimilar measures are often administered at a single point in time. Kerlinger recommends the use of MTMM under these conditions in an effort to study the validity of newly developed scales: "In many research situations, it is very difficult or impossible to administer two or more measures of two or more variables to relatively large samples. Though efforts to study validity must always be made, research should not be abandoned just because the full method is not feasible."

We ran a large correlation matrix for all of the 134 variables in the study, which includes 44 variables in the three instruments. First, to assess convergent validity, the correlations between measures of the same theoretical variable should be different from zero and statistically large enough to encourage further investigation. Second, to assess discriminant validity, a measure should correlate with all measures of the same theoretical variable more highly than it does with any measure of another theoretical variable. The greater the convergent validity (heteromethod-monotrait correlations) and the smaller the discriminant validity (heteromethod-heterotrait correlations), the greater the support for the scale's construct validity.

To assess convergent validity, the smallest within-variable correlation was recorded. Statistical significance was computed using the one-tail t-statistic test in order to adequately interpret the correlation values. All of the 3,686 potential correlations were statistically different from zero at the 0.001 level of significance and are therefore considered large enough to warrant discriminant validity investigation. No negative within-variable correlations were observed.

The theory behind discriminant validity calls for near-zero or low correlations between construct variables. Items within a construct should not correlate more highly with measures of other constructs. A violation occurs for a measure if the lowest within-variable correlation is lower than the correlation between the measure and any other measure of a different theoretical variable. Only 154 of the 3,686 comparisons (p = 4.18%) found a within-variable correlation lower than a correlation with a measure outside the construct variable. Of the 704 correlations between the satisfaction and utilization instruments, only five items (p = .0071) appeared to violate the guidelines for discriminant validity.
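The violation-counting rule described above can be sketched as a simple screen over a correlation matrix. The 4x4 matrix and two-construct grouping below are a hypothetical miniature of the study's 44-item matrix, constructed so that two items violate the rule.

```python
# Discriminant validity screen: an item violates the guideline when its
# lowest correlation with items of its OWN construct falls below its
# correlation with some item of a DIFFERENT construct.
# Hypothetical correlation matrix for four items in two constructs.

def count_violations(corr, groups):
    # corr: symmetric item-correlation matrix; groups[i]: construct of item i.
    n = len(corr)
    violations = 0
    for i in range(n):
        within = [corr[i][j] for j in range(n) if j != i and groups[j] == groups[i]]
        across = [corr[i][j] for j in range(n) if groups[j] != groups[i]]
        if within and across and min(within) < max(across):
            violations += 1
    return violations

corr = [
    [1.00, 0.80, 0.30, 0.25],
    [0.80, 1.00, 0.20, 0.90],   # item 2 correlates .90 across constructs
    [0.30, 0.20, 1.00, 0.75],
    [0.25, 0.90, 0.75, 1.00],   # item 4 likewise: both are violations
]
groups = [0, 0, 1, 1]
print(count_violations(corr, groups))   # 2
```

Reporting the violation count as a fraction of all comparisons, as the study does (154 of 3,686, or 4.18%), gives a summary measure of how cleanly the constructs separate.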

Based upon the psychometric assessment, it is our opinion that the satisfaction and utilization instruments showed reasonably high reliability and validity.

INVESTIGATING UTILIZATION PATTERNS

To further explain the usefulness of the utilization instruments for assessing application quality, we investigated utilization patterns for intended and actual utilization (Table 6). Further investigation was made into any differences that might exist between the two utilization types on each dimension in order to facilitate comparisons. Intuitively, one would always expect to find intended measures scoring higher than actual measures on the same variable. One reason for this is that managers do not always carry through with plans or intentions with respect to learning new software or developing applications. Statistical significance was computed using the two-tail t-statistic test in order to adequately interpret the mean responses.

For the sample of 506, seven differences in mean responses were statistically significant at the 0.01 level of confidence, and one was statistically significant at the 0.05 level. Five task characteristics showed reverse patterns from what we expected; four of those were statistically significant at the 0.01 level. Actual utilization of end user-developed applications to aid in reporting to supervisors and to aid in increasing productivity was significantly higher than intended utilization patterns (t-ratio = -7.80 and -6.85, respectively). This may be because of organizational effectiveness and personal productivity gained from use of the application. It was, however, puzzling why end users were not intending to use applications to facilitate taking action in the future (t-ratio = -8.18).
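A paired-difference t-statistic of the kind reported in Table 6 can be sketched as follows. The eight paired ratings are hypothetical; the sign convention matches the table, where a negative t-ratio means actual use exceeded intended use.

```python
# Paired t-statistic comparing intended vs. actual utilization ratings
# from the same respondents.  Hypothetical 5-point Likert data.
from math import sqrt

def paired_t(intended, actual):
    diffs = [i - a for i, a in zip(intended, actual)]   # intended - actual
    n = len(diffs)
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    return mean_d / sqrt(var_d / n)   # compare against t with n-1 df

intended = [2, 3, 2, 1, 3, 2, 2, 3]
actual   = [3, 4, 3, 3, 4, 2, 3, 4]   # actual exceeds intended throughout
print(round(paired_t(intended, actual), 2))   # -5.29
```

With a sample of 506, as in the study, even modest mean differences produce large t-ratios, which is why so many of the Table 6 contrasts reach the 0.01 level.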

DISCUSSION

Several important implications can be derived from the empirical findings of this study, which, in turn, may contribute



Table 5. Correlations of Items with Total Score

END-USER INFORMATION SATISFACTION INSTRUMENT

                                                    ITEM 1                   ITEM 2
                                                          Item-to-                 Item-to-
Variable                                     Mean*  S.D.  Total     Mean*   S.D.   Total
                                                          Correl.                  Correl.
Accuracy of the application's output          5.33  0.80   .811      5.29   0.89    .766
Timeliness of the application's output        3.75  1.88   .212      5.17   0.75    .890
Precision of the application's output         5.32  0.88   .797      5.01   0.90    .831
Reliability of the application's output       4.84  1.01   .743      5.20   0.83    .905
Currency of the application's output          4.53  1.30   .692      5.25   0.84    .865
Completeness of the application's output      5.08  1.08   .770      5.05   1.10    .750
Volume of output                              4.89  1.21   .541      5.12   0.99    .878
Relevancy of the application's information    5.22  0.95   .771      4.98   1.04    .843
Flexibility of the application                4.93  0.98   .743      5.09   0.99    .797
User confidence in the application            4.33  1.33   .740      5.24   0.84    .824

OVERALL MEASURE                               5.20  0.81

Cronbach Alpha (entire instrument)             .89

* Mean represents 6-point Likert scale

UTILIZATION INSTRUMENTS

                                                   INTENDED                  ACTUAL
                                                          Item-to-                 Item-to-
Variable                                     Mean*  S.D.  Total     Mean*   S.D.   Total
                                                          Correl.                  Correl.
Making decisions                              3.31  1.23   .754      3.20   1.38    .710
Looking for trends                            3.30  1.31   .814      3.46   1.17    .652
Planning                                      3.42  1.32   .707      3.37   1.39    .741
Taking action                                 2.31  1.47   .764      2.95   1.52    .813
Finding problems                              2.95  1.36   .763      3.39   1.45    .800
Historical reference                          3.40  1.48   .676      2.63   1.47    .695
Budgeting                                     3.23  1.21   .757      3.12   1.33    .718
Staying up-to-date on activities              3.17  1.27   .809      3.35   1.18    .659
Controlling and guiding activities            3.37  1.38   .707      3.31   1.36    .746
Aiding in reporting to supervisors            2.31  1.44   .755      2.91   1.49    .819
Aiding in increasing productivity             2.85  1.34   .756      3.35   1.14    .795
Aiding in the cutting of costs                3.32  1.43   .693      2.60   1.43    .686

OVERALL MEASURE                               3.08  0.75             3.14

Cronbach Alpha                                 .82                    .79

* Mean represents 5-point Likert scale


to research in and management of end user computing. First, a statistically reliable and valid instrument which measures intended and actual utilization patterns, when coupled with end user information satisfaction, represents a step forward in assessing quality of end user-developed applications. Second, this research is managerial in nature, addressing the justification of information technology purchases.

Developing good and reliable measures of application quality has proven quite difficult in the past. Comparisons with other studies can be misleading due to variances in samples and instruments, especially with respect to application usage. The lack of validity tests in published studies uncovers a serious problem for researchers attempting to duplicate or test existing models and associated instruments. Application utilization has been used often and yet has been inadequate in its efforts to help improve managerial performance through the effective use of information technology by end users.

The end user information satisfaction component of quality has been investigated in more depth and rigor. Therefore, it is not surprising to obtain a reliability coefficient of .89 for the satisfaction instrument. We focused on the product component of the IOB user information satisfaction instrument, and we feel that the modified IOB instrument is useful in end user computing environments.

We are concerned, however, that should managers wish to assess the superiority of applications developed by different categories of end users, the user knowledge and training component will not be adequately captured. Although we did not empirically compare the Doll and Torkzadeh instrument against the modified IOB in the end user computing environment, the IOB may have broader use for applications developed by functional support, EUC support, and DP programming end users. We suggest that future research efforts be directed at capturing the user knowledge and training component of end user computing satisfaction for a wider variety of end users and applications.

The application utilization instruments in this study considered two dimensions, task characteristics and intended versus actual utilization patterns, in order to capture the multidimensional nature of the utilization construct. We feel that this addressed certain problems with earlier single-item measures of usage. The need for a generalized and standardized instrument to measure utilization is crucial to academics as well as corporate managers. From the qualitative data of this study, managers commented on the fact that justification of end user computing resources is most often based (and often solely based) on intended usage. Historical data comparing intended with actual utilization patterns may expose areas of need and recommend areas for organizational support.

CONCLUSION

The purpose of this study was to examine the quality dimension of end user-developed applications. We defined quality, in this context, as the degree to which an application of high grade accomplishes or attains its goal from the perspective of the end user. We addressed quality as a combination of end user information satisfaction (which addressed the high grade component of quality) with application utilization (which deals with the attainment of goals). The end user information satisfaction instrument was derived primarily from the work of Ives, Olson and Baroudi, focusing on the information product component. The two utilization instruments capture task characteristics along the dimensions of intended and actual usage. The instruments were tested for face validity, internal consistency, and construct validity.

The satisfaction instrument may prove to be useful both in the end user computing environment and in assessing the superiority of larger information systems, providing comparative results for strategic planning efforts. We argue that end users who are categorically technicians (developing applications for others) may be more closely aligned to the IS group than the general population of knowledge workers in the firm. We do

Table 6. Statistical Differences Along Task Characteristics Dimension

Task Characteristics                  Intended Mean   Actual Mean   t-ratio
Making decisions                           3.31           3.20        1.69
Looking for trends                         3.30           3.46       -2.06*
Planning                                   3.42           3.37        0.69
Taking action                              2.31           2.95       -8.18**
Finding problems                           2.95           3.39       -5.86**
Historical reference                       3.40           2.63       10.05**
Budgeting                                  3.23           3.12        1.62
Staying up-to-date on activities           3.17           3.35        2.88**
Controlling and guiding activities         3.37           3.31        0.86
Aiding in reporting to supervisors         2.31           2.91       -7.80**
Aiding in increasing productivity          2.85           3.35       -6.85**
Aiding in cutting of costs                 3.32           2.60        9.60**

Significance levels: * 0.05   ** 0.01
n = 506



not advocate the use of two different satisfaction instruments when one instrument might provide more integrated information for management.

The application utilization instrument can address the cross-related issues of intended versus actual usage with task characteristics, providing corporate managers with a better vehicle with which to assess costly investments in information technologies. Utilization measures can be further improved by examining underlying dimensions of the construct, going to referent theories for operationalizations and, most important, empirical testing. We feel that a new dimension should be considered by future researchers to better explain differences in utilization patterns. User sophistication, the depth to which users use information technology for a variety of tasks, is an important dimension which may help to explain the extreme variance in utilization patterns reported in a variety of research studies. From the data in this study we recommend that managers struggling with the allocation of corporate resources assess both intended and actual utilization patterns over time in order to identify areas where the two patterns diverge. This will allow managers and researchers alike to get a better handle on problem areas.

Finally, we suggest that researchers focus on the comprehensive quality construct for end user-developed applications that may contribute to the success of the organization. In order to do this, we must be tuned into constructs that are multidimensional in nature and attempt to tackle the problem of creating standard definitions of quality and instruments that support those definitions.

REFERENCES

Alavi, M. and Weiss, I. "Managing the Risks Associated with End-User Computing," Journal of Management Information Systems, Volume 2, Number 3, 1986, pp. 5-20.

Amoroso, D.L. "Understanding the End User: The Key to Managing End-User Computing," Proceedings of the 1990 Information Resources Management Association National Conference, May 14-16, 1990, Hershey, Pennsylvania.

Amoroso, D.L. "Organizational Issues of End-User Computing," Data Base, Volume 19, Numbers 3-4, 1988, pp. 49-58.

Amoroso, D.L. and Cheney, P.H. "A Report on the State of End-User Computing in Large North American Insurance Firms," Journal of Information Management, Volume 8, Number 2, 1987, pp. 39-48.

Amoroso, D.L. and Cheney, P.H. "Testing a Causal Model of End-User Application Effectiveness," Journal of Management Information Systems, Volume 8, Number 1, 1991, pp. 63-89.

Amoroso, D.L., McFadden, F., and Britton White, K. "Distributing Realities Concerning Data Policies in Organizations," Information Resources Management Journal, Volume 3, Number 2, (year), pp. 18-27.

Bailey, J.E. and Pearson, S.W. "Development of a Tool for Measuring and Analyzing Computer User Satisfaction," Management Science, Volume 29, Number 5, May 1983, pp. 530-545.


Baroudi, J., Olson, M., and Ives, B. "An Empirical Study of the Impact of User Involvement on System Usage and Information Satisfaction," Communications of the ACM, Volume 29, Number 3, 1986, pp. 232-238.

Benjamin, R.I. "Information Technology in the 1990s: A Long Range Planning Scenario," MIS Quarterly, Volume 6, Number 2, June 1982, pp. 11-31.

Bostrom, R. Conflict Handling and Power in the Redesign Process: A Field Investigation of the Relationship Between MIS Users and System Maintenance Personnel, unpublished Ph.D. dissertation, University of Minnesota, 1978.

Brancheau, J.C., Vogel, D., and Wetherbe, J.C. "An Investigation of the Information Center from the User's Perspective," DATA BASE, Volume 17, Number 1, Spring 1985, pp. 4-16.

Brancheau, J.C. and Wetherbe, J.C. "The Adoption of Spreadsheet Software: Testing Innovation Diffusion Theory in the Context of End-User Computing," Information Systems Research, Volume 1, Number 2, 1990, pp. 115-143.

Cheney, P.H. "Measuring the Success of MIS Development Projects: A Behavioral Approach," Working Paper #1, Iowa State University, 1980.

Cheney, P.H., Mann, R.I., and Amoroso, D.L. "Organizational Factors Affecting the Success of End-User Computing," Journal of Management Information Systems, Volume 3, Number 1, 1986, pp. 65-80.

Davis, G.B. "Caution: User Developed Systems Can Be Dangerous to Your Organization," MISRC Working Paper #82-04, University of Minnesota, 1982.

Doll, W.J. and Torkzadeh, G. "The Measurement of End-User Computing Satisfaction," MIS Quarterly, Volume 12, Number 2, June 1988, pp. 259-274.

Doll, W.J. and Torkzadeh, G. "The Measurement of End-User Computing Satisfaction: Theoretical and Methodological Issues," MIS Quarterly, Volume 15, Number 1, March 1991, pp. 5-10.

Fuerst, W. and Cheney, P.H. "Factors Affecting the Perceived Utilization of Computer-Based Decision Support Systems in the Oil Industry," Decision Sciences, Volume 13, Number 4, October 1982, pp. 554-569.

Gerrity, T. and Rockart, J. "Managing End User Computing in the Information Era," Sloan Management Review, Volume 27, Number 4, Summer 1986, pp. 25-34.

Glorfield, K.D., Luster, P.L., and Cronan, T.P. "User Information Satisfaction: A Comparison of the Ives, Olson, Baroudi, and Doll/Torkzadeh Measures," Proceedings of the National Decision Sciences Institute, Miami, Florida, November 1991, pp. 658-663.

Goodhue, D. "IS Attitudes: Toward Theoretical and Definition Clarity," Proceedings of the 7th International Conference on Information Systems, San Diego, California, December 1986, pp. 181-194.

Guimaraes, T. and Gupta, A. "Personal Computing and Support Services," Omega, Volume 15, Number 6, 1987, pp. 467-475.

Hackathorn, R. "End-User Computing by Top Executives," Data Base, Volume 19, Number 1, Fall/Winter 1987/88, pp. 1-9.

Iivari, J. "User Information Satisfaction (UIS) Reconsidered: An Information System as the Antecedent of UIS," Proceedings of the 8th International Conference on Information Systems, Pittsburgh, Pennsylvania, December 1987, pp. 57-73.

Ives, B. and Olson, M. "User Involvement and MIS Success: A Review of Research," Management Science, Volume 30, Number 5, 1984, pp. 586-603.

Ives, B., Olson, M., and Baroudi, J. "The Measurement of User Information Satisfaction," Communications of the ACM, Volume 26, Number 10, October 1983, pp. 785-793.

Jobber, D. and Watts, M. "Behavioral Aspects of Marketing Information Systems," Omega, Volume 14, Number 1, 1986, pp. 69-79.

Kasper, G. and Cerveney, R. "A Laboratory Study of User Characteristics and Decision-Making Performance in End-User Computing," Information and Management, Volume 9, Number 2, 1985, pp. 87-96.

Kerlinger, F.N. Foundations of Behavioral Research, 2nd Ed., New York: Holt, Rinehart and Winston, Inc., 1973.

Lucas, H. "Performance and the Use of an Information System," Management Science, Volume 21, Number 8, April 1975, pp. 908-919.

Martin, J. Application Development Without Programmers, Englewood Cliffs: Prentice-Hall, 1982.

Melone, N.P. "A Theoretical Assessment of the User-Satisfaction Construct in Information Systems Research," Management Science, Volume 36, Number 1, January 1990, pp. 76-91.

Quillard, J., Rockart, J., Wilde, E., Vernon, M., and Mock, G. "A Study of the Corporate Use of Personal Computers," CISR-WP-109 working paper, Massachusetts Institute of Technology, Massachusetts, 1983.

Random House Unabridged Dictionary, 2nd Ed., New York: Random House, 1972.

Rivard, S. and Huff, S. "An Empirical Study of Users as Application Developers," Information and Management, Volume 8, Number 2, February 1985, pp. 89-102.

Robey, D. "User Attitudes and Management Information Systems Use," Academy of Management Journal, Volume 22, Number 3, 1979, pp. 527-538.

Rockart, J.F. and Flannery, L.S. "The Management of End User Computing," Communications of the ACM, Volume 26, Number 10, October 1983, pp. 776-784.

Srinivasan, A. "Alternative Measures of System Effectiveness: Associations and Implications," MIS Quarterly, Volume 9, Number 3, September 1985, pp. 243-253.

Sumner, M. and Klepper, R. "The Nature and Scope of User-Developed Applications," Proceedings of the American Institute for Decision Sciences, Las Vegas, Nevada, November 1985, pp. 300-311.

Treacy, M. "An Empirical Examination of a Causal Model of User Information Satisfaction," Proceedings of the 6th International Conference on Information Systems, Indianapolis, Indiana, December 1985, pp. 285-297.

Trice, A. and Treacy, M. "An Empirical Examination of a Causal Model of User Information Satisfaction," Proceedings of the 7th International Conference on Information Systems, San Diego, California, December 1986, pp. 227-229.

Wold, H. "Systems Analysis by Partial Least Squares," in Measuring the Unmeasurable, Nijkamp, P., Leitner, P., and Wrigley, N. (eds.), Boston: Martinus Nijhoff, 1985.

Yaverbaum, G. "Critical Factors in the User Environment: An Experimental Study of Users, Organizations, and Tasks," MIS Quarterly, Volume 12, Number 1, March 1988, pp. 75-88.

Zmud, R. "Individual Differences and MIS Success: A Review of the Empirical Literature," Management Science, Volume 25, Number 10, October 1979, pp. 966-979.

Donald L. Amoroso is assistant professor of information systems at the University of Colorado at Colorado Springs. He holds a Ph.D. in MIS from the University of Georgia and has ten years of experience in the information systems field in a wide range of technical, managerial, and consultative positions, including the Bureau of Land Management and Canada Post. Dr. Amoroso has published in leading information systems journals, such as the Journal of Management Information Systems, Information and Management, Data Base, and Information Resource Management Journal. His current research is on measuring the impact of emerging technologies, specifically in the areas of information engineering and creativity in systems design. He is authoring a textbook with Mitchell/McGraw-Hill entitled Decision Making Using Lotus 1-2-3: Building Quality Applications.

Paul H. Cheney is professor and chair of Information Systems and Decision Sciences at the University of South Florida. He received his doctorate in MIS from the University of Minnesota. Dr. Cheney has published several texts and has authored over 30 scholarly articles in journals such as Decision Sciences, Journal of Management Information Systems, MIS Quarterly, and the Academy of Management Journal. He has served with over 100 firms, including Ford and Exxon. Dr. Cheney is internationally known in the areas of office automation, end user computing, and implementation management, and is a well-known speaker before professional groups.
