Digital Tech and the Productivity Paradox


    Digital Technology and the Productivity Paradox:

    After Ten Years, What Has Been Learned?

    by

Paul A. David
Stanford University & All Souls College, Oxford

    Draft: 20 May, 1999

    This paper has been prepared for presentation to the Conference

    Understanding the Digital Economy: Data, Tools and Research, held at the

    U.S. Department of Commerce, Washington, D.C., 25-26 May 1999.

Please do not reproduce without author's permission.

Contact Author: Professor Paul A. David, All Souls College, Oxford OX1 4AL, UK. Tel: 44+(0)1865+279313 (direct); +279281 (secy: M. Holme); Fax: 44+(0)1865+279299; E-mail:



    1. The Computer Revolution and the Economy

Over the past forty years, computers have evolved from a specialised and limited role in the information processing and communication processes of modern organisations to become a general purpose tool that can be found in use virtually everywhere, although not everywhere to the same extent. Whereas once "electronic computers" were large machines surrounded by peripheral equipment and tended by specialised technical staff working in specially constructed and air conditioned centres, today computing equipment is to be found on the desktops and work areas of secretaries, factory workers and shipping clerks, often side by side with the telecommunication equipment linking organisations to their suppliers and customers. In the process, computers and networks of computers have become an integral part of the research and design operations of most enterprises and, increasingly, an essential tool supporting control and decision-making at both middle and top management levels. In the latter half of this forty year revolution, microprocessors allowed computers to escape from their 'boxes', embedding information processing in a growing array of artefacts as diverse as greeting cards and automobiles, and thus extending the 'reach' of this technology well beyond its traditional boundaries.

Changes attributed to this technology include new patterns of work organisation and worker productivity, job creation and loss, profit and loss of companies, and, ultimately, prospects for economic growth, national security and the quality of life. (Genetic engineering may one day inherit this mantle.) Not since the opening of the atomic age, with its promises of power too cheap to meter and threats of nuclear incineration, has a technology so deeply captured the imagination of the public. It is not surprising, therefore, that perceptions about this technology are shaping public policies in areas as diverse as education and macroeconomic management.

One indication of the wider importance of the issues motivating the research presented in this volume can be read in their connection with the rhetoric, and, arguably, the substance of U.S. monetary policy responses to the remarkable economic expansion that has marked the 1990's. For a number of years in mid-decade the Federal Reserve Board Chairman, Alan Greenspan, subscribed publicly to a strongly optimistic reading of the American economy's prospects for sustaining rapid expansion and rising real incomes without generating unhealthy inflationary pressures. His conviction that monetary tightening was not called for by the signs of accelerating output growth and new lows in the unemployment rate appears to have been based in large part upon the anticipation of big productivity payoffs from the investments that have been made in information technology.

Like many other observers, the Chairman of the Fed viewed the rising volume of expenditures by corporations for electronic office and telecommunications equipment since the late 1980s as part of a far-reaching technological and economic transformation in which the U.S. economy is taking the lead:[1]

We are living through one of those rare, perhaps once-in-a-century events.... The advent of the transistor and the integrated circuit and, as a consequence, the emergence of modern computer, telecommunication and satellite technologies have fundamentally changed the structure of the American economy.

During 1996, when he was resisting calls within the Fed to raise interest rates in order to check the pace of expansion and damp down incipient inflationary pressures, Chairman Greenspan stressed just those structural reasons for believing that the relationship between the rate of real output expansion, the unemployment rate and the underlying rate of price inflation had been altered, and that the persisting buoyancy of the economy did not call for the imposition of monetary restraints:[2]

The rapid acceleration of computer and telecommunication technologies can reasonably be expected to appreciably raise our productivity and standards of living in the 21st century, and quite possibly in some of the remaining years of this century.

[1] Testimony of Alan Greenspan before the U.S. House of Representatives Committee on Banking and Financial Services, July 23, 1996.
[2] Alan Greenspan, Remarks before the National Governors Association, Feb. 5, 1996, p. 1, quoted by B. Davis and D. Wessel, Prosperity: The Coming 20-Year Boom and What It Means to You (forthcoming in 1998), Ch. 10.


Yet, there were other members of the Fed's Board of Governors who demurred -- albeit ineffectually -- from the Chairman's relaxed monetary policy prescriptions, and there has been no lack of dissent from the forecast of an imminent productivity bonanza. Soon after Professor Alan Blinder of Princeton ended his term as Vice Chairman of the Fed, he and his colleague Richard Quandt co-authored a paper[3] debunking the notion that the IT revolution already had greatly boosted productivity. According to Blinder and Quandt (1997: pp. 14-15), even if the information technology revolution has the potential to significantly raise the rate of growth of productivity in the long-run, the long-run is uncomfortably vague as a time-scale in matters of macroeconomic management. Going further, they put forward a thoroughly skeptical appraisal of the prospects for a coming IT-based productivity surge -- likening the optimists on that question to the characters in Beckett's play Waiting for Godot.[4] In their view we may be condemned to an extended period of transition in which the growing pains change in nature, but don't go away.

Some diminution of skepticism of this variety has accompanied the quickening of labor productivity growth since 1997, and especially the very recent return of the rate of increase in real GDP per manhour to the neighborhood of 2 percent per annum. But it remains premature to try reading structural causes into what may well be transient or cyclical movements which, in any case, have yet to materially alter the U.S. economy's productivity trend over the course of the 1990's.

    2. The Making and Unmaking of a Paradox -- And the Persisting Puzzle

Disagreements among economists about the underlying empirical question are hardly a new thing. More than a decade has now passed since concerns about the relationship between progress in the field of information technology and the improvement of productivity performance in the economy at large crystallized around the perception that the U.S., along with other advanced industrial economies, was confronted by a disturbing productivity paradox. The precipitating event in the formation of this view was the rather offhand observation made in the summer of 1987 by Robert Solow, Institute Professor at M.I.T. and Economics Nobel Laureate. In the course of a book review Solow remarked: "You can see the computer age everywhere but in the productivity statistics."[5] This characteristically pithy comment soon was being quoted by the business press, repeated in government policy memos, and quickly became the touchstone for proposals to convene academic conferences to assess the state of knowledge on the subject of technology and productivity. In no time at all the question of why the information technology revolution had not sparked a surge in productivity improvements and a consequent supply-driven wave of economic growth was universally referred to as "the Productivity Paradox." Almost overnight it reached the status of being treated as the leading economic puzzle of the late twentieth century.

[3] Alan S. Blinder and Richard E. Quandt, "Waiting for Godot: Information Technology and the Productivity Miracle?", Princeton University Department of Economics Working Paper, May 1997.
[4] Samuel Beckett, Waiting for Godot: A Tragicomedy in Two Acts, 2nd ed., London: Faber and Faber, 1965.
[5] Robert M. Solow, "We'd Better Watch Out," New York Times Book Review, July 12, 1987, p. 36.


All this commotion was not undeserved. Professor Solow's remark had served to direct public attention to what he and some other observers took to be a perplexing, and rather ironic, conjunction of two conflicting impressions about the rate of technological advance in the industrialised West. On the one hand, excitement in the business press had persisted since Time Magazine declared the computer its 'Machine of the Year' for 1982. With the continuing rapid pace of innovation and rapid growth of investment in computer technologies, industry and academic economists were preoccupied not only with the immediate effects of the computer but with the potentially even larger implications of the accelerating convergence of telecommunication and computer technologies.

On the other hand, in the U.S. Private Domestic Economy the average annual pace of increase of labor productivity had been cut by well more than half between the periods 1948-1966 and 1966-1989, tumbling from 3.11 to 1.23 per cent per annum.[6] This was sufficient to pull the secular trend rate for the entire post-WW II era (1948-1989) back down to 2.05 per cent per annum, about the same pace that had been maintained throughout the preceding half-century. In other words, during 1966-1989 labor productivity grew at an anomalously sluggish pace, only a bit more than half the 2 per cent per annum rate that had been characteristic of the epoch of modern economic growth, and barely half the (2.52 percent per annum) average rate that was achieved in the U.S. private economy over the 1929-1966 interval.
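As a quick arithmetic check, the 2.05 per cent secular-trend figure is approximately the duration-weighted average of the two sub-period rates just cited. The sketch below (plain Python, using no data beyond the rates quoted in the text) makes the calculation explicit; the small residual gap reflects compounding, which a simple weighted average of growth rates ignores.

```python
def weighted_trend(segments):
    """Duration-weighted average growth rate over consecutive periods.
    segments: list of (start_year, end_year, growth_rate_percent_per_year)."""
    total_years = sum(end - start for start, end, _ in segments)
    return sum((end - start) * rate for start, end, rate in segments) / total_years

# Sub-period rates cited in the text: 3.11 %/yr (1948-1966), 1.23 %/yr (1966-1989)
trend = weighted_trend([(1948, 1966, 3.11), (1966, 1989, 1.23)])
print(round(trend, 2))  # 2.06 -- close to the 2.05 %/yr secular trend cited
```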

But, even more disturbing was the evidence that the slowed rise in labor productivity reflected a collapse in the growth rate of total factor productivity (TFP): a measure of the crude TFP residual, that is, the portion of the output per manhour growth rate that is not attributable to the contribution of increasing capital per manhour, fell to 0.66 percentage points per annum during 1966-89. That slow a total productivity growth rate had not been experienced in the U.S. Private Domestic Economy for a comparably extended period since the middle of the nineteenth century. Recent calculations by Abramovitz and David (1998) show that during 1966-1989 the refined TFP growth rate (adjusting for composition-related quality changes in labor and capital inputs) plummeted to 0.04 percent per year. Contrast that with the average annual rate of 1.4 percent that had been maintained over the whole of the 1890-1966 period. The slowdown was so pronounced that it brought the TFP growth rate all the way back down to the low historical levels that statistical reconstructions of the mid-nineteenth century economy's record reveal.
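The "crude TFP residual" named above is just labor-productivity growth minus the share-weighted contribution of capital deepening. A minimal sketch of that decomposition follows; the capital share and capital-per-hour growth rate are illustrative assumptions (not figures from the text), chosen so that the arithmetic is consistent with the 1.23 and 0.66 per cent rates quoted above.

```python
def crude_tfp_residual(g_labor_prod, g_capital_per_hour, capital_share):
    """Output-per-hour growth not attributable to capital deepening,
    per the standard growth-accounting decomposition (all rates in %/yr)."""
    return g_labor_prod - capital_share * g_capital_per_hour

# Illustrative: 1.23 %/yr labor-productivity growth (1966-89, from the text),
# with an assumed capital share of 0.33 and capital-per-hour growth of 1.73 %/yr
residual = crude_tfp_residual(1.23, 1.73, 0.33)
print(round(residual, 2))  # 0.66 -- the crude-residual figure cited in the text
```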

[6] The figures cited are based on estimates of real gross product per manhour, assembled by M. Abramovitz and P. A. David in American Macroeconomic Growth in the Era of Knowledge-Driven Progress: A Long-Run Interpretation, unpublished memorandum, December 1998.

Superior estimates of real gross output for the private non-farm business economy have become available from the BLS (May 6, 1998: USDL News Release 98-187). These provide a more accurate picture by deflating the current value of output using price indexes that re-weight the prices of the component goods and services in accord with the changing composition of the aggregate. These chain-weighted measures lead to productivity growth measures which reveal two further points about the slowdown. The first is that, rather than becoming less marked as the oil shock and inflationary disturbances of the 1970's and the recession of the early 1980's passed into history, the productivity growth rate's deviation below the trend that had prevailed during the 1950-1972 interval had grown even more pronounced during the late 1980s and early 1990's. Labor productivity increased during 1988-1996 at 0.83 percent per annum, fully 0.5 percentage points less quickly than the average rate during 1972-1988. Correspondingly, the refined TFP growth rate estimate (without allowance for vintage effects) sank to 0.11 percent per annum, which represented a further drop of 0.24 percentage points from the 1972-88 pace.
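Chain-weighting matters here because fixed-weight deflators freeze expenditure patterns at a base period. A toy two-good sketch (all prices and quantities hypothetical, not data from the BLS release) shows how re-weighting each period changes the measured price index when one good's price is falling fast while its quantity share soars:

```python
def laspeyres(p0, p1, q0):
    """Fixed-weight price index for period 0 -> 1, using period-0 quantities."""
    return sum(p * q for p, q in zip(p1, q0)) / sum(p * q for p, q in zip(p0, q0))

def chained(prices, quantities):
    """Chain-linked index: multiply successive one-period Laspeyres links,
    so the quantity weights are updated every period."""
    index = 1.0
    for t in range(len(prices) - 1):
        index *= laspeyres(prices[t], prices[t + 1], quantities[t])
    return index

# Two goods: "computers" (price falling ~20% per period, quantities soaring)
# and "everything else" (price creeping up). All numbers hypothetical.
prices     = [[100, 100], [80, 103], [64, 106]]
quantities = [[1, 10], [3, 10], [9, 10]]

fixed = laspeyres(prices[0], prices[2], quantities[0])  # base-period weights only
chain = chained(prices, quantities)
print(round(fixed, 3), round(chain, 3))  # 1.022 0.995
```

The fixed-weight index says the aggregate price level rose, because it keeps weighting computers at their tiny initial quantity; the chained index, re-weighting each period as computer purchases soar, says it fell -- and a lower deflator means correspondingly faster measured real output and productivity growth.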

Perceptions of the productivity paradox as a failure of new technology to accelerate the growth of the task efficiency of resource use emerged as much from a "bottom up" view of the economy as from a "top down" view of the highly aggregated categories provided by national product accountants. Indeed, the notion that there is something anomalous about the prevailing state of affairs drew much of its initial appeal from the apparent failure of the wave of innovations based on the microprocessor and the memory chip to elicit surges of growth in labour productivity and multifactor productivity from those very branches of the U.S. service sector that were investing so heavily in computers and office equipment. Worse still, despite the hopes that many continued to hold that the "information technology revolution" would soon spark a revival, little support for such optimism could be found in the first wave of empirical studies that looked into this question systematically. Some saw no association between greater use of computers and higher productivity, whereas others reported that heavy outlays on equipment embodying new information technologies appeared to be associated with decreases in productivity, or subnormal rates of return on investment (see, e.g., Roach (1987, 1988, 1991), Loveman (1988), Morrison and Berndt (1990)).

The economics profession's reactions to these anomalous and perplexing developments have tended to be expressed in one of three characteristic positions: either the Solow paradox is an artefact of inadequate statistical measures of the economy's true performance, or there has been a vast over-selling of the productivity enhancing potential of IT, or the promise of a profound impact upon productivity has not been mere hype, but the transition to the techno-economic system in which this will be fulfilled should be understood to be a more arduous, costly, and drawn out affair than was initially supposed. We shall first consider these positions more fully, before turning in Part II to assess the light that each has been able to shed on recent experience, and therefore on future prospects.

    2.1 Doubts About the Data -- Distortions in the Mirror of Economic Reality

First to be considered are the "numbers skeptics" who see the problem as residing mainly in the productivity statistics themselves. They suggest that Professor Solow's formulation of the supposed paradox contains its own resolution. The economy could well be experiencing substantial real gains in "productivity" from succeeding waves of ICT innovations, but their impact is masked by defects and deficiencies in the available measures of economic performance. By this account, faulty measures have seriously understated the growth of real output, thereby creating the appearance of spuriously low rates of growth in both labour productivity and total factor productivity.


Within the camp of economists who pointed to "measurement errors" as the resolution of the productivity paradox there was no clear consensus as to when these errors became dominant. For example, should the same explanation be applied to account for the slowdown observed in the pace of productivity since the mid-1970's? By the same token, comparatively little attention has been given to drawing a tighter connection between the suspected severity of the measurement error problem in recent years and the nature of developments characterising the information revolution itself which would support the proposed direction of causation.

The productivity slowdown emerged by the early 1980s, and thus before the personal computer's contribution to the rapid advance of the information revolution during the 1980s. This raises the possibility that mechanisms responsible for the slowdown also could have a major role in the creation of a computer productivity paradox, inasmuch as it was the proliferation of personal computers to which Professor Solow was referring in 1987. During the past twenty years, several significant structural changes have occurred in the U.S. economy, including the continued growth of the service activities' share of national output, the proliferation of new products, and the enlargement of the role of knowledge-based industries in the economy. Each of these structural changes, as well as the accompanying organisational changes, is also linked to information and communication technologies. It is therefore possible that the productivity slowdown and the computer productivity paradox are different and mutually reinforcing features of the same set of phenomena.

2.2 Disillusionment with IT -- Maybe Godot Doesn't Exist and Won't Be Coming

In a second explanatory category, as our previous reference to the views of Professor Blinder indicates, there are "computer revolution sceptics" who maintain that there really is nothing paradoxical about the flagging post-1973 pace of productivity advance, because the notion of a "computer revolution" has been based more upon "hype" than on concrete evidence promising enormous efficiency gains in the branches of the economy where most productive resources are employed. Quite the contrary, they catalogue the many perverse consequences that have followed the introduction of computers into the workplace, especially in the service sector.

Those who, like Robert Gordon (1997, 1998), have swung round to a scepticism about the actual impact certainly can justly point to the existence of essentially transient factors which had contributed to the high measured rates of total factor productivity growth that characterised the performance of the post-WWII U.S. economy (and other industrial economies to an even more marked degree). Therefore, they maintain, a slowdown was not only inevitable, but an economy-wide resurgence in the size of "the residual" due to the information technology revolution remains highly implausible for the foreseeable future. This is the case, they contend, because the old technologies have been exhausted as a source of further efficiency improvement in many lines of tangible commodity production, whereas the application of ICT in other, "service" industries has fallen far short of expectations. Furthermore, other highly promoted sources of commercial innovation, such as advances in biotechnology, high temperature superconductivity and new materials, remain at best rather far off in the future as major generators of conventional productivity improvement.


2.3 From the Regime Transition Hypothesis to Cautious Optimism

Third, there are those who approach the problem of accounting for what has been happening in what might be labelled a spirit of "cautious optimism." Explanations of this kind have seized upon the idea that we are involved in the extended and complex process of transition to a new, information intensive techno-economic regime, a transition that involves the abandonment or extensive transformation of some, and the renewal of other, aspects of the former, mature "Fordist" technological regime. This technological regime, based upon techniques of mass production and mass-marketing of standardised goods and services produced by capital-intensive methods requiring dedicated facilities, high rates of through-put, and hierarchical management organisations, had assumed its full-blown form first in the U.S. during the second quarter of the present century. In its mature stage it underlay the prosperity and rapid growth of the post-WWII era, not only in the U.S. but in other industrial countries of western Europe, and in Japan, where its full elaboration had been delayed by the economic and political dislocations of the 1920's and 1930's, as well as by WWII itself.

Regime transitions of this kind, it is argued, involve profound changes, whose revolutionary nature is better revealed by their eventual breadth and depth than by the pace at which they achieve their influence. Exactly because of their breadth and depth, they require the development and coordination of a vast array of complementary tangible and intangible elements: new physical plant and equipment, new kinds of workforce skills, new organisational forms, new forms of legal property, new regulatory frameworks, new habits of mind and patterns of taste. For these changes to be set in place requires decades, rather than years, and while they are underway there is no guarantee that their dominant effects upon macroeconomic performance will be positive ones. The emergence of positive effects is neither assured nor free from the exploration of blind alleys (trajectories that proved economically nonviable and were abandoned). Moreover, the rise of a new techno-economic paradigm can have dislocating, "backwash" effects upon the performance of surviving elements of the previous economic order. In short, the "productivity paradox" may be a real phenomenon, paradoxical only to those who suppose that the progress of technology is autonomous, continuous and, being "hitchless and glitchless," translates immediately into cost-savings and economic welfare improvements.

Those in the cautious optimist camp suggest that the persistence of slow rates of productivity growth should be viewed as a slip between cup and lip which might well be corrected with the appropriate attention and co-ordination. If so, the future may well bring a resurgence in the measured total factor productivity residual that could be clearly attributed to advances in ICT. They are therefore intent on divining the early harbingers of a more widespread recovery in productivity growth. But they acknowledge that such a renaissance is not guaranteed by any automatic market mechanism, and argue that it is foolish simply to wait for its arrival. They point, instead, to the need to concurrently address many of the technological drawbacks that already have been exposed, as well as the difficult non-technological, organisational and institutional problems that must be overcome in both the economy's public and private spheres in order to realise the full potential of the "information revolution".


The foregoing caricatures of the three main explanatory camps into which the economists seem to have organized themselves are sufficiently overdrawn that it would be inappropriate to firmly attach names to the various positions. Yet, together they do convey the spirit of much of the recent discussion in the scholarly journals, government studies, and the business press. Rather than approaching the task of interpreting the evidence as one of evaluating these as alternative and conflicting hypotheses among which it is necessary to choose a prime resolution for the productivity paradox, an effort at synthesis appears far more appropriate. From the evidence that has been accumulating during the past decade it seems increasingly clear, first, that there is less that is really paradoxical in the economic record than was supposed, although considerable basis remains for puzzlement regarding the sources of the total factor productivity slowdown. Secondly, while each of the proposed lines of explanation has something to contribute towards explaining the latter development, greater weight can be placed upon a combination of adverse productivity concomitants of the regime transition, and several measurement problems that have been exacerbated by the digital technology revolution itself.

3. Unmaking the Paradox -- and Leaving a Persisting Puzzle

That there is a quite simple and positive linkage between new technological artefacts and economic growth remains a view widely held among economists today. Improved technological artefacts are likened to better tools, and better tools are supposed to make workers more productive. Moreover, the prospect of higher productivity for the users of tools embodying the new technologies ought to encourage investment in those assets, and thereby stimulate demand. If the relative price of these tools is falling, should we not be observing growth in both the volume of sales of the tools and the productivity achieved by using them?

There is nothing particularly complicated about such a process. New organisations of production techniques may simply permit adaptation to changing relative input prices; they could allow the substitution of inputs whose relative prices are declining for those whose relative prices are rising, thereby averting upward pressure on real costs per unit of output -- without necessarily increasing multifactor productivity. If relatively rapid technological progress is taking place in the sector that is providing these better tools, the consequent fall in their price vis-a-vis other inputs will drive the process of substituting them for those factors of production in a variety of applications, and their contribution to the growth of output will come in the form of the yields on the investments required to put them in place.

It is plain that the first part of this causal chain is strikingly substantiated by the data for the case of electronic computers, and by the larger category of equipment to which they belong: office, accounting and computing machinery, abbreviated as OCAM.[7] According to the estimates developed by Dale Jorgenson and Kevin Stiroh (1995: Table 1), over the period 1958-1992 the volume of OCAM investment in the U.S. was increasing at an average yearly rate of 20.2 percent, while the associated (constant quality) index of prices was falling at an average rate of 10.1 percent per annum; the corresponding rates for computers alone were still more spectacular, with investment quantities sky-rocketing at an average rate of 44.3 percent per annum while prices were plunging 19.1 percent per year on average.[8] Nothing fore-ordained that, once the expansion of computer power had begun, it should proceed at such an explosive pace; but the still-small size of the computerized sector in the economy during this phase in a sense limited the scope for its breath-taking growth to set up complicating, "back-wash" effects at the microeconomic and macroeconomic levels that might jeopardize its rapid continuation.

But, by the same token, rather than supposing that the resulting 20 and 47 percentage point per annum trend rates of growth found for the real OCAM stock and the computer equipment stock, respectively,[9] would translate into commensurately rapid productivity increases, it is necessary in that connection also to take into account the comparatively small weight that these forms of capital carry in the total flow of capital services. Furthermore, the productive contribution made by the growth of capital services to the rate of output growth depends upon the magnitude of the "elasticity" of the latter with respect to the former, and that too lies well below unity. Both considerations should lead one to expect that the contribution made by the rise of computer capital services to the growth of output would be tightly circumscribed.

Following this familiar line of growth accounting analysis, two path-breaking empirical assessments have been made of the contribution to the growth of aggregate output in the U.S. by investments embodying computer technology. Oliner and Sichel (1994), and Sichel (1997: Ch. 4), have produced one such set of measurements for the private non-farm business sector from "computer hardware services" over the long interval 1970-1992, adding estimates for "computer software services" in the years 1987-1993. Jorgenson and Stiroh (1995: sect. 4, especially) have implemented essentially the same conceptual approach to gauge the contribution of computer capital services (hardware only) within the framework of a more elaborate set of growth accounting estimates for the domestic economy as a whole in the years between 1958 and 1992.

[7] The historical continuity of the U.S. NIPA (National Income and Product Accounts) has been preserved during the computer and telecommunication revolution by the assignment of computers to the increasingly heterogeneous standard industrial classification "computer and office equipment". At this 3-digit level of aggregation, general purpose digital computers are counted along with paper punches and pencil sharpeners. Telecommunication equipment at the three-digit level is no less discriminating as a category, mingling cellular radio telephones with radio and television broadcasting equipment, grouping private branch exchange equipment with telephone answering machines, and throwing in police sirens for completeness. The seeming anachronism of these product classifications may yet be reversed by the penetration of digital technology into simple artefacts: answering machines have already acquired intelligence; perhaps, with the insertion of microprocessor chips, we will soon see paper punches, pencil sharpeners and police sirens smarten up as well.

[8] These movements reflect the corrections made by the substitution of hedonic price indexes, which in effect measure the change in the nominal price-performance ratio of computer equipment. Given that the estimates available for the other components of producers' durable equipment do not reflect such quality corrections, comparisons of the relative growth of the constant dollar value of computers and OCAM within that larger investment category are distorted, as are the resulting figures tracing the relative growth of the computer capital stock vis-a-vis the rest of the capital stock. For further discussion of the introduction of hedonic prices for computer equipment, and its consequences for statistical comparisons, one should consult Wykoff (1995) as well as Jorgenson and Stiroh (1995).

[9] See the estimates presented by Jorgenson and Stiroh (1995: Table 2).


Not surprisingly, the results of these studies concur in showing a secularly rising contribution being made to the output growth rate from this source. Starting from negligibly low initial levels, these rise to reach approximately similar significant dimensions by the late 1980's and early 1990's. But, "significant" in this context can still mean a contribution amounting to only a fraction of a percentage point in the annual output growth rate: Sichel (1997: Table 4-2) gives a computer equipment service contribution for 1987-93 of 0.15 percentage points, and a combined hardware plus software contribution of 0.26 percentage points, whereas Jorgenson and Stiroh's calculations put the hardware capital contribution during 1985-92 at the considerably higher level of 0.38 percentage points per annum. The relative "contributions" estimated for hardware alone thus lie in the modestly low range representing 7.5 to 15 percent of the corresponding output growth rates. But, the implied range for the relative contributions of hardware and software capital services combined is more substantial, representing between 13 and 24 percent of the output growth rate.10

The circumscribed focus of these studies upon investment in computers does risk leaving out the broader significance of the micro-processor's advent. In recognition of this, Kenneth Flamm (1997) presents a variant story, one that undertakes to quantify the semi-conductor revolution rather than the computer revolution. Considering the rapid rate of fall in the price-performance ratio of micro-processors, and their rapid and extensive penetration of many areas of application in the economy, Flamm's calculations nevertheless point to consumer surpluses on this account that represent only 8 percent of annual GDP growth. In other words, the measured direct contribution to GDP growth, aside from what the vendors of micro-processors appropriate through revenues, is found to be about 0.2 percentage points per annum.

10 Jorgenson and Stiroh's higher estimate for computer hardware capital services implies that a considerably more generous proportional contribution to output growth would have been made by hardware and software services combined. Using the relationship between the two components reported by Sichel (1997: Table 4.2) for approximately the same years, the implied contribution from software services could be put at 0.28 percentage points. The latter must be added to the 2.49 percentage point rate of output growth given by Jorgenson and Stiroh (1995: Table 4) for 1985-92, as service flows from software capital were not included on either the output or input sides of their growth accounting. The combined relative contribution from hardware and software on this basis leads to the figure reported as 24 percent in the text: (.38 + .28)/(2.77) x 100 = 23.8 percent.
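The footnote's back-of-envelope arithmetic can be checked directly, using only the figures quoted there:

```python
# Reproduce the footnote's combined-contribution share of output growth.
hardware = 0.38            # Jorgenson-Stiroh hardware contribution, pct pts/yr
software = 0.28            # implied software contribution via Sichel's ratio
base_output_growth = 2.49  # Jorgenson-Stiroh output growth rate, pct pts/yr

# Software service flows were excluded from both sides of their accounts,
# so the software contribution is added back to the output growth rate:
adjusted_output = base_output_growth + software

combined_share = (hardware + software) / adjusted_output * 100
print(round(combined_share, 1))  # 23.8
```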


There are several ways in which these results may be read. The fact that by the early 1990's the growth of computer capital services was contributing possibly as much as a fifth of the U.S. economy's yearly output growth certainly could be taken as a strong, quantitative refutation of the argument that the investments poured into this sort of capital formation had been squandered, and that no commensurate payoff in terms of output and labor productivity was forthcoming.11 At the same time, these results can be welcomed as dispelling the widely held optimistic belief that the spectacular pace at which computer power has been growing must have induced a productivity bonanza. The latter confusion stems from overlooking the fact that no matter how rapidly a productive asset may be accumulating, what also enters into the determination of its impact upon output is the size of the flow of productive services it yields, in relation to the other inputs required for production. In the case of computer services this share has been found still to be rather small; even in relation to just the total flow of capital services (gross of depreciation) in the economy, the computer stock's share had only reached the 1.0 percent mark by 1985, before the personal computer investment boom enlarged it to 4.5 percent in 1992.12

It is worth noting that the modern conjuncture of high rates of innovation and slow measured growth of total factor productivity is not a new, anomalous development in the history of the US economy. Indeed, most of the labor productivity growth rate during the interval extending from the 1830's through the 1880's was accounted for by the increasing capital-labor input ratio, leaving residual rates of total factor productivity growth that were quite small by the standards of the early twentieth century, and a fortiori, by those of the post-WW II era. During the nineteenth century the emergence of technological changes that were biased strongly in the tangible-capital-using direction, involving the substitution of new forms of productive plant and equipment that carried heavy fixed costs and commensurately expanded scales of production, induced a high rate of capital accumulation. The capital-output ratio rose without forcing down the rate of return very markedly, and the substitution of increasing volumes of the services of reproducible tangible capital for those of other inputs, dispensing increasingly with the sweat and the old craft skills of workers in the fields and workshops, along with the brute force of horses and mules, worked to increase real output per manhour.13

11 The counter-argument by technological pessimists would point out that the growth accounting calculations of Oliner and Sichel (1994), and Jorgenson and Stiroh (1995) merely assume that the computer equipment investments have been earning the same net rate of return as the other elements of the capital stock (on an after-tax basis, in the case of the capital accounting framework developed by Jorgenson (1990) and implemented in his work with Stiroh). In view of the continuing investments, that does seem a plausible minimum assumption, but it is nonetheless a supposition, rather than a point that the growth accounting has proved. As will be seen from the contributions by Brynjolfsson and Hitt (1995) below, efforts to estimate the rates of return earned on computer capital econometrically, from data at levels of aggregation below that of the economy, do refute the suggestion that the private rates of return have been sub-normal. [This is reinforced by subsequent findings reported by Brynjolfsson and Hitt (1997, 1998).]

12 These "shares" are calculated from the estimates in Jorgenson and Stiroh (1995 [Chapter 10]: Table 3). The corresponding shares of computer equipment services in the flow of services from non-residential producer durable capital are, of course, considerably larger, reaching 18 percent by 1992.

13 See Abramovitz and David 1973, David 1977, Abramovitz 1989. Thus, most of the labor productivity growth rate was accounted for by the increasing capital-labor input ratio, leaving residual rates of total factor productivity growth that were quite small by the standards of the early twentieth century, and a fortiori, by those of the


But, viewed from another angle, the spirit of the Solow computer paradox lingers on, like the Cheshire Cat's smile; if not a real paradox, there remains a puzzle occasioned by the surprisingly small TFP residual. It should be noted that the explanations which stress the small contribution made by the rise of computer capital do so under the dual assumption that the investments embodying the new information technologies have been earning only a normal 12 percent net rate of return, like the other forms of capital, and that those private returns to the investing firms fully capture their contributions to production throughout the economy. A different set of calculations made for the period 1987-1993, however, which allows for the possibility that computer assets might be generating net rates of return very much above the normal (economy-wide) average, finds that the biggest plausible effect would add another 0.3-0.4 percentage points per annum to the growth rate of labor productivity.15

15 Sichel (1997: p. 81) makes an allowance that increases the net rate of return about four-fold, raising the gross share of computer (hardware and software) earnings to .034 from .016. This raises the contribution to output growth from 0.26 to 0.56 percentage points per annum. This 0.3 percentage point per annum increment would be the magnitude of the rise in labor productivity growth resulting from allowance for the gap between privately appropriated normal earnings and the hypothesized above-normal rate of return. An alternative calculation would consider the contribution from computer capital-deepening, defined as the growth of computer capital (hardware and software) in relation to the growth of real output in the economy. For 1987-1993 the figures presented by Sichel (1997: Table 4.2) indicate that the real ratio of computer capital inputs to real GDP was rising at about 14 percent per annum. The contribution to the aggregate labor productivity growth rate is found by multiplying that rate by the ratio of the computer inputs' share to the total labor input share in GDP. Calculated under the normal rate of return assumption, that ratio is 0.025, whereas under the Sichel super-normal returns assumption it is 0.052. This implies that the effect of increasing computer input intensity in the economy could be said to be raising labor productivity by 0.34 percentage points per annum under the assumption of normal rates of return, whereas the contribution to the labor productivity growth rate on the higher rate of return assumption might be as much as 0.73 percentage points, or almost 0.4 percentage points higher. Thus, the text gives 0.3 to 0.4 as the range if a generous allowance is made for computer yields above the normal rate of return.
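The capital-deepening variant described in this footnote amounts to multiplying the growth of the computer-capital/output ratio by the ratio of income shares. A sketch using the figures quoted (small discrepancies with the footnote's 0.34 reflect rounding in the underlying data):

```python
# Capital-deepening contribution to labor productivity growth:
# (growth of computer capital relative to output) x (share ratio).
deepening_pct = 14.0   # growth of computer capital inputs relative to GDP, %/yr
ratio_normal = 0.025   # computer share / labor share, normal rate of return
ratio_super = 0.052    # the same ratio under super-normal returns

print(round(deepening_pct * ratio_normal, 2))  # ~0.35 (text rounds to 0.34)
print(round(deepening_pct * ratio_super, 2))   # ~0.73
```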


Yet, might one not have supposed that there were spillovers in the form of induced learning effects, benefitting the employers of workers trained on the equipment installed at the expense of other firms? Similarly, the use of computer equipment in generating new knowledge that found its way into the productive capabilities of those who did not purchase or rent those R&D investment goods would constitute another plausible form of externality. These spillovers, representing economic value that for one reason or another could not be appropriated by (internalized within) the entities that bought and operated the IT capital, imply that a gap would exist between the social and the private rates of return on information capital. Nevertheless, to the extent that they contributed to the productive capabilities of the firms to whom they accrued, these benefits would have to turn up on the real output side of the growth accountants' ledgers for the economy as a whole. Where, then, are the externalities that one might have expected from the accumulation of this new class of productive assets? Is the story of the computer revolution simply that every individual private investor has received, and can only hope to go on receiving, exactly what they paid for, and nothing more, from the collective accumulation effort?

These are intriguing questions, indeed, particularly when set against the expectations that have been stirred by discussions of the emergence of distributed intelligence, the networked economy and the national and global information infrastructure. Understandably, the puzzle continues to compel the attention of economists interested in the long term prospects of growth at the macro-level, and those who seek to understand the microeconomics of the phases of the information technology revolution through which we have been passing, and their relationship to those that may lie ahead.

    4. Measurement Problems

Those who would contend that the slowdown puzzle and the computer productivity paradox are consequences of a mis-measurement problem must produce a consistent account of the timing and extent of the suspected errors in measurement. The estimation of productivity growth requires a consistent method for estimating growth rates in the quantities of inputs and outputs. With a few notable exceptions (e.g. electricity generation), the lack of homogeneity in industry output frustrates direct measures of physical output and makes it necessary to estimate physical output using a price deflator. Similar challenges arise, of course, in measuring the heterogeneous bundles of labor and capital services, but, for the moment, attention is being directed to the problems that are suspected to persist in the measures of real product growth. Systematic overstatement of prices will introduce a persistent downward bias in estimated output growth and, therefore, an understatement of both partial and total factor productivity improvements.

Such overstatement may arise in several distinct ways. There are some industries, especially services, for which the concept of a unit of output itself is not well defined, and, consequently, where it is difficult if not impossible to obtain meaningful price indices. In other cases, such as the construction industry, the output is so heterogeneous that it requires special efforts to obtain price quotations for comparable products both at one point in time


and over time. The introduction of new commodities again raises the problem of comparability in forming the price deflators for an industry whose output mix is changing radically, and the techniques that statistical agencies have adopted to cope with the temporal replacement of old staples with new items in the consumers' shopping basket have been found to introduce systematic biases. These are only the simpler and more straightforward potential worries about mismeasurement, but we shall begin with a consideration of their bearing upon the puzzle of the slowdown and the computer productivity paradox, before proceeding to some less tractable conceptual questions.

4.1 Over-Deflation of Output: Can This Explain the Slowdown Puzzle?

The simplest formulation of a mismeasurement explanation for the slowdown falls quantitatively short of the mark. This does not mean that there is no underestimation of the labor productivity or the total factor productivity growth rate. Perhaps the paradox of the conjunction of unprecedentedly sluggish productivity growth with an explosive pace of technological innovation can be resolved in those terms. There is indeed something to be said for taking that more limited approach a step further.

The Boskin Commission Report (1997) on the accuracy of the official Consumer Price Index (CPI) prepared by the U.S. Bureau of Labor Statistics concluded that the statistical procedures in use were leading to an average overstatement of the annual rate of increase in "the real cost of living," as measured by consumer prices, of 1.1 percentage points. This might well be twice the magnitude of the error introduced by mismeasurement of the price deflators applied in estimating the real gross output of the private domestic economy over the period 1966-89.16 Were we to allow for this by making an upward correction of the real output growth rate of as much as 0.6 to 1.1 percentage points, it would lift the level of the Abramovitz-David (1998) estimate for the "super-refined" multifactor productivity growth rate back up to the positive range, putting it at 0.35-0.85 percentage points per annum for the 1966-89 period. Even so, that correction -- which entertains the extremely dubious assumption that the conjectured measurement biases in the output price deflators existed only after 1966 and not before -- would still have us believe that between 1929-66 and 1966-89 there had been a very appreciable

16 The Boskin Commission (1997) was charged with examining the CPI, rather than the GNP and GDP deflators prepared by the U.S. Commerce Department Bureau of Economic Analysis, and some substantial part (perhaps 0.7 percentage points) of the estimated upward bias of the former is ascribed to the use of a fixed-weight scheme for aggregating the constituent price series to create the overall (so-called Laspeyres type of) price index. This criticism does not apply in the case of the national product deflators, and consequently the 1.1 percent per annum figure could be regarded as a generous allowance for the bias due to other, technical problems affecting those deflators. On the other hand, the Boskin Commission's findings have met with some criticism from BLS staff, who have pointed out that the published CPI reflects corrections that already are regularly made to counteract some of the most prominent among the suspected procedural sources of overstatement -- the methods of "splicing in" price series for new goods and services. See Madrick (1997); it is claimed that, on this account, the magnitude of the continuing upward bias projected by the Boskin Commission may be too large. The latter controversy does not affect our present illustrative use of the 1.1 percentage point per annum estimate, because we are applying it as a correction of the past trend in measured real output, and, in the nature of the argument, an upper-bound figure for the measurement bias is what is wanted.


slowdown in multifactor productivity growth -- a fall-off in the average annual pace of some 0.3-0.8 percentage points. Moreover, there is nothing in the findings of the Boskin Commission (1997) to indicate that the causes of the putative current upward bias in the price changes registered by the CPI have been operating only since the end of the 1960's.
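The bias-correction arithmetic described above can be made explicit. The -0.25 baseline below is an implied figure reconstructed from the corrected range quoted in the text, not a published estimate:

```python
# Applying the conjectured output-deflator bias range to the measured
# "super-refined" MFP growth rate for 1966-89 (baseline implied by the
# text, since adding 0.6 and 1.1 must yield the quoted 0.35-0.85 range).
baseline_mfp = -0.25           # implied measured rate, pct points per annum
for bias in (0.6, 1.1):        # conjectured upward bias in the deflator
    corrected = baseline_mfp + bias
    print(round(corrected, 2))  # 0.35, then 0.85
```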

Plainly, what is needed to give the mismeasurement thesis greater bearing on the puzzle of the productivity slowdown, and thereby help us to resolve the information technology paradox, would be some quantitative indication that the suspected upward bias in the aggregate output deflators has been getting proportionally larger over time. Martin Baily and Robert Gordon (1988) initially looked into this and came away without any conclusive answer. Although Gordon (1996, 1998) has moved towards a somewhat less skeptically dismissive position on the slowdown being attributable to the worsening of price index mismeasurement errors, some further efforts at quantification are required before the possibility can be dismissed.

    4.2 Has the Growth of Hard-to-Measure Activities Led to a Growing Bias?

It is reasonable to consider whether the nature of the on-going structural changes in the U.S. economy, and elsewhere, is such as would exacerbate the problem of output underestimation, and thereby contribute to the appearance of a productivity slowdown. In this regard, Zvi Griliches' (1994) observation that there has been relative growth of output and employment in the "hard-to-measure" sectors of the economy is indisputable. The bloc of the U.S. private domestic economy comprising Construction, Trade, Finance, Insurance and Real Estate (FIRE), and miscellaneous other Services has indeed been growing in relative importance, and this trend in recent decades has been especially pronounced. Gross product originating in Griliches' hard-to-measure bloc averaged 49.6 percent of the total over the years 1947-1969, but its average share was 59.7 percent in the years 1969-1990.17

Nevertheless, is the impact of the economy's structural drift towards unmeasurability big enough to account for the appearance of a productivity slowdown between the pre- and post-1969 periods? Given the observed trend difference (over the whole period 1947-1990) between the labor productivity growth rates of the "hard-to-measure" and the "measurable" sectors that are identified by Griliches (1994, 1995), the hypothetical effect on the weighted average rate of labor productivity growth of shifting the output shares in the way that they actually moved can be calculated readily. There is certainly a gap in the manhours productivity growth rates favoring the better-measured, commodity producing sectors, but the shrinkage in that sector's relative size was too small to exert substantial downward leverage on the aggregate productivity growth rate. The simple re-weighting of the trend growth rates lowers the aggregate labor productivity growth rate by 0.13 percentage points between 1947-1969 and 1969-1990, but that represents less than 12 percent of the actual slowdown that Griliches was seeking to explain.18
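The reweighting exercise just described is a simple shift-share calculation. The sectoral growth rates below are invented (chosen only to match the roughly 1.4-point gap noted in footnote 18), not Griliches' published figures; the output shares are those quoted in the text:

```python
# Shift-share sketch: hold sectoral labor-productivity growth rates fixed
# and reweight them by the changing output shares of the two blocs.
def aggregate_growth(shares, rates):
    """Share-weighted average growth rate across sectors."""
    return sum(s * r for s, r in zip(shares, rates))

rates = [2.5, 1.1]             # measurable vs hard-to-measure, %/yr (assumed)
shares_early = [0.504, 0.496]  # 1947-69 output shares (from the text)
shares_late = [0.403, 0.597]   # 1969-90 output shares (from the text)

composition_effect = (aggregate_growth(shares_early, rates)
                      - aggregate_growth(shares_late, rates))
print(round(composition_effect, 2))  # ~0.14 pct points (text reports 0.13)
```

The pure composition effect is small because the share shift, about ten percentage points, multiplies a sectoral growth gap of only about 1.4 points.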

17 See Griliches (1995 [Chapter 10]: Table 2), for the underlying NIPA figures from which the shares in the total private (non-government) product were obtained. The averages in the text were calculated as geometric means of the terminal year values in each of the two intervals.

18 The gap between the measured and the hard-to-measure sectors' long-term average rates of real output per


manhour amounted to about 1.40 percentage points per annum, but it was smaller than that during the 1947-1969 period and widened thereafter. The more pronounced post-1969 retardation of the average labor productivity growth rate found for the hard-to-measure sector as a whole was thus responsible, in a statistical sense, for a large part of the retarded growth of aggregate labor productivity. But, as will be noted below, it would be quite misleading to suggest that every branch of activity within the major sectors labelled hard-to-measure by Griliches participated in the slowdown, and that the industries comprising the measured sectors did not.


A somewhat different illustrative calculation supporting the same conclusion has been carried out by Abramovitz and David (1998). For that purpose they begin by accepting the finding of the recent report of the Advisory Commission to Study the Consumer Price Index, the so-called Boskin Commission (1997), that the measurement biases affecting that index now tend to overstate current and prospective annual rates of price increase by 1.1 percentage points on average, and that a plausible range around this upward bias extends from 0.8 to 1.6 percentage points. For the sake of argument, it can then be supposed that in the past the overall bias stood at the high end of this range, at 1.6 points a year, and that an equal overstatement was present in the price deflator for the U.S. gross private domestic product. Abramovitz and David then suppose, further, that the upward bias arose entirely from deficiencies in the price deflators used to derive real gross product originating within the group of hard-to-measure sectors identified by Griliches, which would imply that the overstatement of the growth of prices was much higher than 1.1 percentage points a year in that sector. They then add a further assumption, namely that this large overstatement existed since the early post-WW II era in the case of the hard-to-measure sector, whereas prices and real output growth were properly measured for the rest of the economy. What is the upshot?

Under the pre-suppositions just described, a simple calculation of the weighted bias -- taking account of the increasing relative weight of the hard-to-measure sector in the value of current gross product for the private domestic economy, as before -- must find that the measurement bias for the whole economy grew more pronounced between the rapid-growth period, 1948-66, and the slowdown period, 1966-89. But, once again, it is possible to put a measure to the extent of the hypothesized mismeasurement, and the answer found by Abramovitz and David (1998) is that only 12 percent of the slowdown in the labor productivity growth rate that actually was observed would be accounted for in this way. Although quite consistent with the preceding alternative quantitative assessment of the mismeasurement hypothesis based upon the structural shift away from commodity production, the assumptions underlying this version are extreme; this implies that the comparatively minor impact found on this score represents an upper-bound estimate.

Were the foregoing not enough to warrant turning to examine other formulations of the approach to understanding the productivity slowdown as an artefact of mismeasurement, Robert J. Gordon (1998a) recently has weighed in on the same point with the findings of some new calculations that use far more finely disaggregated data from the BLS. His findings establish that the slowdown in average real output per manhour has been a quite pervasive phenomenon, affecting many industries within the commodity producing groups as well as the services. Moreover, comparing pre-1973 with post-1973 rates of growth in output per manhour at this more detailed industry level shows that the experience of a slowdown was not statistically significantly more frequent among the industries that fall into the hard-to-measure category than it was among those classified as being well-measured.

    4.3 The Role of New Goods in Unmeasured Quality Change

A rather different mechanism affecting the accuracy of measured productivity is more


ubiquitous across industries, and may well have become more pronounced in its effect during the past two decades: the turnover of the product line arising from the introduction of new goods and services, and the discontinuation of old staples, which has been experienced widely among the advanced industrial economies. This is different from the shift of the product mix between sectors whose outputs are intrinsically easier or harder to measure. It is a measurement difficulty arising from the underestimation of the growth in new products of improved quality, vis-a-vis the older part of the product line, and consequently an understatement of the rate of growth of the aggregate output of each branch of production in which this is taking place. A change in product mix of this sort not only would heighten the problem of unmeasured quality improvement, but might be in some part traceable to the effects of the emerging ICT regime. In that way it would be seen that the information revolution itself could be contributing to the appearance of a productivity slowdown, and hence to the creation of its own paradoxically weak impact upon macroeconomic growth. This is an intriguing line of argument that bears closer consideration.

New information technologies and improved access to marketing data are enabling faster, less costly product innovation, manufacturing process redesign, and shorter product life cycles. Diewert and Fox (1997) present some evidence from Nakamura (1997) on the four-fold acceleration of the rate of introduction of new products in U.S. supermarkets during 1975-92 compared with 1964-75. By combining this with data from Baily and Gordon (1988) on the rising number of products stocked by the average U.S. grocery supermarket, it is possible roughly to gauge the movement in the ratio between these flow and stock measures, thereby getting a better idea of the movement in the new product turnover rate. This indicates that, in contrast with the essential stability prevailing between the mid 1960s and the mid 1970s, there was a marked rise in the new product fraction of the stock between 1975 and 1992: if only half of new products were stocked by the average supermarket, the share they represented in the stock would have increased from about .09 to .46. With such an elevation of the relative entry rate of new products, the potential is raised for underestimation of average output growth due to the delayed linkage of new to old price series.
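The turnover-ratio gauge described above can be sketched as follows. The introduction and stock counts are invented solely to reproduce the .09 and .46 endpoint shares quoted in the text, not drawn from the underlying sources:

```python
# Hypothetical reconstruction of the new-product turnover ratio: the flow
# of newly introduced items (times the fraction actually stocked) relative
# to the stock of items the average supermarket carries.
def new_product_share(introductions: int, stocked_fraction: float,
                      stock: int) -> float:
    """Fraction of the shelf stock made up of newly introduced products."""
    return introductions * stocked_fraction / stock

early = new_product_share(introductions=1800, stocked_fraction=0.5,
                          stock=10000)
late = new_product_share(introductions=14720, stocked_fraction=0.5,
                         stock=16000)
print(round(early, 2), round(late, 2))  # 0.09 0.46
```

The point of the exercise is that introductions grew far faster than the stock of items carried, so the new-product fraction of the shelf rose sharply.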

There is a further implication. The broadening of the product line by competitors may be likened to a common-pool/over-fishing problem, causing crowding of the product space, with the result that even the reduced fixed costs of research and product development must be spread over fewer units of sales. Moreover, to the extent that congestion in the product space raises the expected failure rate in new product launches, this reinforces the implication that initial margins are likely to be high when these products first appear and will be found to have fallen rapidly in the cases of the fortunate few that succeed in becoming a standard item.

During the 1970s, the Federal Trade Commission was actively interested in whether such product proliferation was a new form of anti-competitive behaviour and investigated the ready-to-eat breakfast cereal industry; see Schmalensee (1978).


    The mechanism of product proliferation involves innovations in both marketing and theutilisation of the distribution networks. Traditionally, the new product introduction involved

    the high fixed costs of major marketing campaigns and thereby high unit sales. These costs

    have been reduced by marketing strategies in which the existing mass market distributionsystem is being configured under umbrellas of 'brand name' recognition for particular classes of

    products (e.g. designer labels, 'lifestyle' brands, and products related to films or other cultural

    icons) or the reputation of particular retailers or service providers (e.g. prominent departmentstore brands or financial services provided by large retail banks). Although, in the U.S., the

    mass market distribution system was well-established early in this century, utilising it for

    product and brand proliferation was frustrated by the high costs of tracking and appropriatelydistributing (and re-distributing) inventory. These costs that have fallen due to the use of

    information and communication technologies. It is not surprising that a statistical system

    designed to record productivity in mass production and distribution should be challenged when

the 'business models' of this system change as the result of marketing innovation and the use of information and communication technologies.

    Of course, some progress has been made in resolving the computer productivity paradox

by virtue of the introduction of so-called hedonic price indexes for the output of the computer and electronic business equipment industries themselves. These indexes reflect the

    spectacularly rapid decline in the price-performance ratios of such forms of capital, but, by the

same token, they increased the measured growth of computer capital services, which are intensively used as inputs in a number of sectors, including the services sectors. The implied

rise in computer-capital intensity contributes to supporting the growth rate of labor productivity in

    those sectors, but in itself the substitution of this input for others does nothing for the

    multifactor productivity growth rate.
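The point can be made explicit with the standard growth-accounting decomposition (a sketch in conventional notation; the symbols below are generic illustrations, not drawn from the text):

```latex
% Labor productivity growth decomposed into MFP growth plus
% capital-deepening terms for computer capital (K^c) and other capital (K^o):
\[
  \frac{\dot{Y}}{Y} - \frac{\dot{L}}{L}
  \;=\; \frac{\dot{A}}{A}
  \;+\; s_{c}\!\left(\frac{\dot{K}^{c}}{K^{c}} - \frac{\dot{L}}{L}\right)
  \;+\; s_{o}\!\left(\frac{\dot{K}^{o}}{K^{o}} - \frac{\dot{L}}{L}\right),
\]
% where s_c and s_o are the income shares of the two kinds of capital.
% Hedonic deflation raises the measured growth of computer-capital
% services, \dot{K}^c / K^c, and hence the second term on the right,
% but leaves the multifactor productivity term \dot{A}/A unaffected.
```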

Thus, correcting the upward bias of information technology price deflators has done

wonders for the growth rate of multifactor productivity in the computer equipment industry, and its effect has contributed to the revival of the manufacturing sector's

productivity simply through the growing weight carried by that industry. But the persistently

sluggish multifactor productivity growth rates elsewhere in the economy, including within other branches of manufacturing where computer equipment has been diffusing, continue to

belie the expectations and frustrate the hopes that the information revolution would profoundly

enhance the productive capacity of the U.S. economy.

    5. Conceptual Challenges: What Are We Really Supposed to be Measuring?

    Beyond the technical problems of the way that the national income accountants are

coping with accelerating product innovation and quality change lie several deeper conceptual

issues. These have always been with us, in a sense, but the nature of the changes in the

organization and conduct of production activities, and particularly the heightened role of

information -- and changes in the information state -- in modern economic life, may be bringing

these problematical questions to the surface in a way that forces reconsideration of what our

    measures are intended to measure, and how they actually relate to those goals.

In this section we look at three issues of this sort. First, some surprises appeared when

economists sought to illuminate the macro-level puzzle through statistical studies of the impact of IT at the microeconomic level, using observations on individual enterprises' performance. This

    highlights the conceptual gap between task productivity measures, on the one hand, and

profitability and revenue-productivity measurements on the other. It will be seen that not only is there an important difference, but the relationship between the two conceptual measures may

    itself be undergoing a transformation, as a result of the impact of the way IT is being applied in

businesses. Next, we must consider the way we wish to treat investments in learning to utilize a new technology, and the implications this has for national income accounting. A major

    technological discontinuity, which is what the advent of a new general purpose engine often has

been seen to be, may induce more than the usual increment of learning processes, which are not

counted as production of intangible assets and not recorded in the national income and product accounts. If so, this has the potential to occasion a more serious distortion of our

    macroeconomic picture of the way resources are being used, and what is being produced.

5.1 The Microeconomic Evidence about Productivity Impacts of IT -- Excess or Subnormal?

Further light on the puzzle has been gained through the path-breaking research into the

productivity impacts of IT conducted at levels below that of the macroeconomy, pursued by Brynjolfsson and Hitt (see Ch. ?8, here, based on 1995, 1996; followed by

Brynjolfsson 1997) and Lichtenberg (see Ch. ?7, based on 1995, 1997). These studies make use

of company-level cross-section data. An initially puzzling feature of their findings was that

the elasticity of output with respect to the use of IT capital by the company was so high that it implied supernormally high, or "excess," returns to computer investments by the large firms

represented in the cross-section samples.

    This was a surprise, as it flew in the face of those who suggested that corporate America

    was wastefully pouring too much investment into the purchase of electronic office equipment,

and implied that still more intensive use of such equipment would be warranted. But it also was anomalous when viewed against the apparently sluggish growth of productivity in some of the

    very same industries and sectors that were represented in these cross-sections. These puzzles

    were soon resolved by the recognition that what the cross-section production function analyses

were measuring was the impact of computers not on task productivity but, instead, on revenue productivity. The only way to render the outputs of the firms drawn from different industries

comparable for purposes of statistical analysis of that kind was to measure them all in monetary

units. The gap between big (cross-section) impacts gauged in revenue-productivity terms, and weak (time-series) effects gauged in task-productivity terms, means that there could be good private profit-making reasons for this investment that do not translate into an equivalently

powerful social rate of return, the latter being what is called for in assessing the contribution of

    computer capital to national product growth.

    In other words, the standard for gauging what represents "excess returns" on computer

    and office equipment investments must be adjusted for the rapid rate of anticipated depreciation

of capital value due to the falling price-performance ratio. At the equilibrium point, where there will be indifference between installing new equipment and waiting to do so, the firm's expected

    net operating benefits per dollar of reproduction costs for these durable assets must equal the

opportunity-cost rate of return on the firm's outlay, plus the expected annual rate of asset depreciation. If prices of equivalent-performance computer equipment are declining at rates

    approaching 20 per cent per annum, then we should be thinking of excess pre-tax returns as

    starting somewhere above 30 percentage points.
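The equilibrium condition described above is the familiar user-cost-of-capital relation; written out as a sketch, in which the 10 per cent opportunity-cost rate is purely an illustrative assumption, not a figure from the text:

```latex
% At the margin of indifference between installing now and waiting,
% the expected net operating benefit b per dollar of reproduction cost
% must cover the opportunity-cost rate of return r plus the expected
% annual rate of depreciation \delta of the asset's value:
\[
  b \;=\; r + \delta .
\]
% With constant-quality computer prices falling at close to 20 per cent
% per annum, \delta \approx 0.20; taking, illustratively, r \approx 0.10,
\[
  b \;\approx\; 0.10 + 0.20 \;=\; 0.30 ,
\]
% so only pre-tax returns above roughly 30 per cent would count as
% "excess" returns on such equipment investments.
```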

    It also is the case that subsequent investigations along the same lines have found that

there were additional correlates of IT intensity among firms, such as higher-quality workers and company reorganization programs. Taking those into account statistically led to a

    substantial elimination of the apparent excess of the estimated returns on IT capital vis-a-vis

    the returns on capital of other kinds. The significance of the latter findings will be considered

    more fully below.

5.2 Leaving Out Learning Investment: Is the Scope of the NIPA Overly Narrow?

The computer revolution involves a transformation of the tools used in productive work

    and thereby a dramatic change in the skills needed within the workforce. A very important

    component of the adjustments made to inputs in accounting for the total factor productivity

residual was a more adequate account of the changing quality of the workforce. In particular, educational attainment has been taken as the main indicator of the increasing quality of the

    workforce. While it is undeniable that educational attainment has improved, it is also possible

    that this variable overstates the change in worker skills relevant to employment. Skills that

must be acquired 'on the job' represent an investment in intangible capital to organisations. These investments are not capitalised and therefore appear as current labour costs in the

    national income accounts.

    For these investments in intangible capital to be relevant to the productivity slowdown

    and the computer productivity paradox, it would be necessary to demonstrate that the 'match'

between educational attainment and relevant employment skills was falling over time. If it is true that the real costs of these workers are higher to companies due to the extent of 'on the job'

    training necessary for their effective performance in increasingly information technology

intensive work environments, the skills attributed to educational attainment may be

discontinuous, overstating the 'quality' of labour. To the extent that the relevant skills are non-cumulative (e.g., 'specific' to a given generation of information systems), the intangible

    investments of companies will remain high and will also be difficult to address through reforms

    in the educational system.

    By comparison, for earlier eras, there is evidence that the match between education and

    skills was better. During the 1890-1919 period, secondary school expansion was able to keep

ahead of the rising requirements for more educated workers, and the skills conveyed in secondary school may have been more relevant to those needed in the workplace.

    Correspondingly, evidence recently developed by Goldin and Katz (1996) suggests that

    changes in the educational system during the 1920s supported the growth of the new high-skill

industries on the eve of the Great Depression. The rapid expansion of higher education following World War II has been identified by Nelson (1990) as a source of U.S. leadership in

    technology and productivity during the 1950s and 60s.

    If the link between educational attainment and labour skills deteriorated during the

    1970s and 1980s, this deterioration could be linked to both the productivity slowdown and the

computer productivity paradox. The significance of this deterioration for 'mismeasurement' would, however, depend very much on the potential for its alleviation. If the mismatch is a

    persistent result stemming from rapid technological change and the need to 're-create' the

complementary skills needed for employment in an information technology-intensive workplace, shortcomings in productivity growth are likely to persist indefinitely. This is, of

course, consistent with the computer revolution sceptics' point of view. On the other hand, if the

    investments that companies are making in information technology related skills are cumulative

and complemented by the growing extent to which computer literacy is being incorporated in secondary and post-secondary education, the size of intangible investments made by companies

    should begin to decline with the surplus recorded as productivity improvement and, eventually,

    wages. If this interpretation is accurate, the 'cautious optimist' viewpoint would be vindicated.

    5.3 Can Conventional Measurement Scales Be Trusted in a "Weightless Economy"?

Biases in the measurement of output growth within the conventional NIPA framework may

have been contributing both to the puzzling slowdown and to the appearance of

paradoxically weak impacts from computerization, but the case remains to be made that this is

the kind of 'magic bullet' that would conclusively dispatch both puzzles. The existing

candidates proposed for this role are deficient on grounds of either timing or magnitude. While it is possible to expand the list of measurement problems in a way that would conjoin the

    two sides of the puzzle, and thus contribute further to their resolution, the most promising

approach along those lines involves a head-on challenge to the existing productivity measurement framework; it argues that the scope of what is considered by the official accounts

to be the economy's "gross output" is overly narrow, and that a properly consistent realignment

of the definitions of outputs and inputs would raise the growth rate of the former in relationship to the latter. Moreover, it suggests that such a realignment is needed especially in view of the

    way that the information revolution has been affecting economic activity.

This challenge to the official statistics is worth examining more closely, for if their usefulness in tracking macroeconomic performance is deteriorating, they will become

increasingly misleading as a guide in fiscal and monetary management; increased scope for

variant readings of signals whose accuracy has been impeached by experience is bound to introduce a greater degree of subjectivity in policy formation -- so that public trust in the personal judgements of those in positions of responsibility for policy-making, rather than in

the economic logic and empirical reproducibility of the bases for their judgements, must come

increasingly to the fore. At present, those, like Chairman Greenspan, who are 'cautious optimists' about the future returns on information technology investment may be hostages to

    fortune. A resurgence of inflation, perhaps induced by mechanisms unrelated to those

    discussed so far, would support the advice, if not necessarily the reasoning, of the computer

    revolution sceptics.

    What does this line of approach have to add concerning the relation between

productivity measurement and economic performance? Economists generally agree that a fundamental standard for assessing economic performance is the improvement of social

    welfare. Conventional measures of productivity growth have the virtue of being

unambiguously linked to welfare improvement, at least if one is prepared to ignore or otherwise account for issues such as environmental externalities. This link may be summarised by the

    maxim that social welfare is increased by expanding output and that this welfare will be further

increased if more "utility" can be obtained from the goods and services created by "transforming" inputs that represent the same or a smaller sacrifice of "utility." That

formulation parallels the traditional notion of productivity, or efficiency, being gauged by the ratio

    of some physical measures of outputs to inputs. The shift to a "utility" metric, however,

implicitly opens the door to a far wider scope for what should be counted as output and, correspondingly, as inputs.

    Indeed, it might be thought begging the question to presuppose that the use of

information technology necessarily carries very substantial conventional productivity gains at all, inasmuch as some of the central issues in the productivity paradox are acknowledged to

revolve around the conceptual ambiguities and measurability of the economic impacts of the

new computer and telecommunications technologies. The latter are historically unprecedented in the readiness with which they are finding applications in the

information-intensive service sectors of the economy, where the notion of "output of

    constant quality" itself is a slippery construct that continues to resist direct translation into

simple, scalar measures of the task efficiency achieved in firms and industries. As has been pointed out already, a substantial degree of uniqueness and context-dependence inheres in the

    very nature of information products. The intuitive and conventionalized association of the

notion of "productivity" with the engineering concept of standardized-task-efficiency thus is being seriously challenged by the increasing intangibility and mutability of "outputs" and

    "inputs." Rather than being easily "pinned down" for quantification, the new so-called

    "weightless economy" should be seen to be floating loose in statistical space.

    A range of concrete examples may help convey a better appreciation of the connection

    between the expansion and increasing sophistication of "the information economy" and the

suspicion that outmoded national accounting concepts may be more and more seriously distorting our perception of the economy's productive performance. Were two lawyers each to

    have drafted identical legal codicils, each to be attached to the will of two different clients, the

task-efficiency concept of productivity would have us ask whether the effort involved -- in terms of the lawyers' time and office capital used -- was the same in the two instances. Were that found to be the case, the levels of productivity in the two situations would be judged

    identical. On the other hand, the circumstances of the two clients might have been different, and

the value of having an expert draft the codicil might correspondingly differ between them, which could reflect itself in a difference in the fees the clients were willing to pay. Ought this

    to be taken into account, thereby measuring the amount of "constant quality legal services" as

    being different in the two instances? Other things being the same, the latter procedure would

register a difference in the associated productivity measures.

The source of the conundrum just posed is not simply that the "meaning," or the

economic utility, of the product in question is dependent on characteristics of the consumers (which include their circumstances and subjective states); in some degree that is true in

general. The national income accountant's troubles stem from the possibility that each such

transaction in what, from the service-provider's viewpoint, might be the same commodity item, can be separately provided and differently priced according to the legal needs of the client.

    If this is a complication at one point in time when consumers are heterogeneous, it is equally a

problem when the circumstances of consumers change across time. New forms of insurance, and new financial derivatives that facilitate portfolio hedging, may reduce the financial risks to

which people are exposed. The increased security is worth something, and its value is

reflected in the customers' willingness to pay the commissions and premiums charged by the

vendors of these assets. With changes in external circumstances, the willingness to pay of a given customer may be subject to change. Should we say that the providers of such insurance

instruments have become "less productive" when the premium rates paid by the purchaser(s) are

    lowered -- say, because the volatility of financial markets has been reduced by closer

integration and lower arbitrage costs, or because people with greater wealth have become more psychologically tolerant of a given degree of downside risk?

In the same vein, one might view the "quantity" of medical malpractice insurance "service" received by doctors (and hospital administrators) to be well-defined in economic

welfare terms, measurable in principle by asking what real income increment would

    compensate an individual practitioner for the withdrawal of a specified incremental amount of

"cover." Do we then want to say that the volume of service represented by a given policy -- and hence the "productivity" of the agent providing it -- has increased, or view it as remaining

    the same, when the insurer boosts the premium rate in response to an upward trend in

jury-awarded damages to plaintiffs in suits alleging malpractice? Or should that change be disregarded by being treated as a pure price increase rather than a "service-quality"

    increase?20

20 Note that if the change is viewed as a price increase that exactly corresponded to the value of the

increased actuarial risk assumed by the insurer, one might say that more "insurance service" had been provided by

the policy in question. In making a productivity calculation, if at the margin the increased value to the customer of

being relieved of incremental risk was matched by the increased cost to the insurer of assuming that risk (i.e., no profit

occurring at the margin), then inclusion of the costs of risk-bearing among the inputs should just offset the increased

output and there would be no recorded gain in measured total input efficiency (multifactor productivity).

Like the preceding examples, this exhibits the conceptual distance between the engineer's and the economist's intuitive notions of what an economy's productivity is

    measuring. The engineering conceptualization of productivity is that it measures the

efficiency with which the provision of a standardized, producer-defined good or service is carried out. By contrast, the economist is more disposed to think that an economy's

    "productivity" should be gauged by the ratio between the economic welfare ("utility", or "final

user satisfaction") that is yielded by the goods and services produced, and a corresponding valuation of the inputs used up by the production process. Although the latter conceptual

    framework is flexible enough to permit its implementation for measurement purposes despite

the presence of heterogeneous, and increasingly intangible, outputs and inputs, the problem posed by the growth in the importance of "information-goods" on both the input and output

    sides of the production process is that the "transformations" that the latter involves are more and

    more of the (immaterial) sort that go on in people's heads. Hence, they are becoming less

    subject to the familiar class of constraints that material transformations are expected to obey.

    At least two implications may be drawn from the latter perception of the nature of the

    digital technology revolution. First, the proliferation of products that we have considered in

Section 4.2 would seem to be at odds with a traditional understanding of the link between productivity and welfare. More is being produced, but the 'more' in this case is not necessarily

    the same 'more' that would be measured by conventional productivity measures. What is being

produced is greater diversity in the available array of goods and services, i.e., increased variety of choice for the consumer. Now it has long been recognised that the social welfare value of

    variety is ambiguous. Product differentiation may either increase or reduce social welfare,

    depending upon how consumers value the choices it provides, and the extent to which

producers use the market power it provides to raise prices (incurring deadweight losses in consumer surplus in addition to transfers from consumer to producer surplus). Thus, even if

    the hypothesised effects of marketing innovation and information technology-related

efficiencies in distribution are established, their influence on social welfare cannot be assessed without further assumptions and evidence.

Secondly, another, similarly ambiguous conclusion results from the discussion of whether we should expand measured output to include investments in "learning." In the

    situation at hand, such sacrifices of current conventional goods and services production are

    made in the course of upgrading organizational and individual labor skills, so that in the future

it will be possible to exploit the full potential of the latest vintages of computer hardware and software. If the information technology skills acquired by those who entered the labor force

    after 1970 are cumulative, social welfare is likely to improve as the average competence of

employees is enhanced. This would be so even if private enterprises might not be able individually to capture all the benefits of upgrading the skills of their workers and managers. On the other hand, if the decision to introduce new equipment and new computer

    programs is seen as having de-skilled individuals and organizations, by rendering their

technology-specific competence obsolete, the matter becomes more problematic. If human capital and organizational competence were to go on being rendered obsolete at

    something close to the same pace at which information technology relevant capabilities are

    being acquired, it would be fair to conclude that both the productivity and welfare promises of
