Prepopulating Audit Workpapers with Prior Year Assessments: Default Option Effects on Risk Rating Accuracy

Sarah Bonner

University of Southern California

Tracie Majors

University of Southern California

Stacey Ritter

University of Southern California

May 2018

Accepted by Phillip Berger. This study was supported by a Center for Audit Quality and Auditing Section of the American Accounting Association Access to Audit Personnel Award. We thank Margot Cella and Tom Payne of the CAQ for facilitating access to participants, and participants for giving their time. We thank Taylor Reis for research assistance, and Joseph De La Torre, Sharon Kim, Taylor Reis, McKenzie Storey, Demetrio Tacad, Jr., and Jason Wechter for assistance with an early version of the instrument. We thank Jeff Johanns, JungKoo Kang, Alison Kays, Suteera Pongtepupathum, Lori Smith, and Fiona Wang for assistance with the risk assessment task. We are thankful for helpful comments and suggestions from the anonymous referee, Eric Allen, Tim Bauer, Tim Brown, Deni Cikurel, David Erkens, Cassandra Estep, Kirsten Fanning, Brent Garza, Kamber Hetrick, Bright Hong, Kathryn Kadous, JungKoo Kang, Fiona Wang, Devin Williams, Amanda Winn, and Dan Zhou, as well as workshop participants at Emory University, Florida State University, Georgia Institute of Technology, University of Illinois at Urbana-Champaign, and University of Southern California.

Prepopulating Audit Workpapers with Prior Year Assessments: Default Option Effects on Risk Rating Accuracy

Abstract

Risk assessment is a critical audit task, as auditors’ accuracy therein affects audit effectiveness and financial reporting quality, as well as audit efficiency. We propose that risk assessment accuracy for client risks that have changed from the prior year is affected by the manner in which auditors access prior year risk assessments, specifically whether they face a default option created by the prepopulation of current year workpapers with those assessments. We find that auditors with prepopulated (versus blank) workpapers are less accurate for risks that have changed because they are more likely to stick with last year’s assessments, and also to work fast. We then show that auditor characteristics reflecting a preference for accuracy reduce, but do not eliminate, these effects. Finally, we provide exploratory evidence that sticking and working fast are associated with, respectively, motivated reasoning and superficial processing. Collectively, these findings suggest the critical need for an intervention, and also have implications outside auditing.

JEL codes: M41, M42, D80, D91

Keywords: Default options, prepopulation, prior year results, audit workpapers, risk assessment, audit effectiveness, audit efficiency, auditor characteristics


1. Introduction

Risk assessment is a critical audit task.1 Auditors tend to approach this task by referring to prior year risk ratings and evidence, and considering whether current year client conditions indicate changes (Brazel and Agoglia [2004]). Because risk ratings directly affect substantive testing (Knechel et al. [2013]), auditors’ accuracy at changing ratings when conditions change affects audit effectiveness and financial reporting quality, as well as audit efficiency (Allen et al. [2006]; PCAOB [2015]). Based on its inspections, the PCAOB has voiced concern that auditors sometimes do not increase ratings when risks increase (e.g., PCAOB [2015]), which can lead to insufficient substantive testing and a higher probability of missing any material misstatements that exist. Firms are concerned that auditors may not decrease risk ratings when warranted,2 leading to unnecessary substantive testing. We posit that risk rating accuracy may be affected by a previously unexplored, seemingly innocuous factor – the manner in which auditors access prior year risk assessments, specifically whether they use current year workpapers that are prepopulated with these assessments, and, therefore, face a default option.3

A default option is the choice alternative that goes into effect if a decision-maker does not actively choose to the contrary (Thaler and Sunstein [2003]), e.g., if she does not “uncheck” an already checked box on a form. Prepopulated workpapers create a default option because if auditors do not take action, prior year risk assessments become current year assessments. By contrast, non-prepopulated (blank) workpapers do not create a default option; here, auditors likewise refer to prior year risk assessments to assess changes, but they do so via other means (e.g., by viewing a separate file), and must actively select current year ratings and document evidence.4 A large literature shows that people are more likely to adopt an option (such as organ donation) when it is the default than when it is an alternative or there is no default (McKenzie et al. [2006]). A key explanation for these findings is that decision-makers respond as if default options carry meaning (Brown and Krishna [2004]) – either that a policymaker prefers the default option or finds it to be acceptable, even if not preferred (McKenzie et al. [2006]), or that the default has value simply because it occupies the space in which the final choice will appear (Dhingra et al. [2012]). In other words, contrary to the classical economics view that people have established preferences that are unaffected by extraneous factors, decision-makers often respond as if they are constructing preferences based on the presence, or particular value of, a default option (e.g., Slovic [1995]).

1 “Risk assessment” involves – for each risk factor (e.g., complexity of revenue recognition) – choosing a rating to reflect the level of the risk of material misstatement posed by the client’s circumstances and providing evidence to explain the rating. Rating choices are made from a limited set of options such as “low,” “medium,” and “high,” and evidence consists of a description of the relevant circumstances.

2 Several partners with whom we spoke made this observation.

3 Whether workpapers are prepopulated with prior year results varies in practice. For example, in our sample, 22% of auditors have experienced only prepopulated workpapers, 28% have experienced only non-prepopulated workpapers, and the remaining 50% have experienced a mix.

Based on this literature, we predict that, on average, auditors with prepopulated workpapers will be less accurate at changing risk ratings when warranted than auditors with non-prepopulated workpapers.5 We predict that this effect will occur because auditors respond as if they believe: (1) it is acceptable to stick with the default of last year’s risk rating, e.g., if uncertain about the appropriate rating for this year, and (2) their firm has a preference for efficiency, so they should work fast on the task. A stick-with-last-year response to prepopulation clearly will reduce rating accuracy for risks that have changed. While unrelated to the value of the default option per se, working fast also likely will reduce accuracy for risks that have changed because auditors doing so can miss information that suggests ratings be changed.6

4 In other words, the non-prepopulated workpaper setup does not remove prior year risk ratings and evidence; doing so would be impractical given that auditors have recurring roles on engagements.

5 It is not possible to examine the “accuracy” of documented evidence. Instead, we use this evidence to measure auditors’ cognitive processes.

The default options literature, however, tends to examine choices such as organ donation that do not have a “right answer.” By contrast, there is a right answer for current year risk ratings – whether they change in the correct direction when risks have changed, or do not change when risks have not changed. Thus, while our hypotheses focus on whether auditors’ on-average responses to prepopulation are as predicted by the extant literature, an important part of our study is examining the effects of auditor characteristics related to a preference for accuracy that could reduce negative effects of prepopulation on rating accuracy. Finally, because interventions for negative effects of prepopulation could be targeted toward the behavioral responses (sticking and working fast) or to the cognitive processes underlying those responses, we conduct exploratory analyses within the prepopulation condition to examine the relation of the behavioral responses to specific cognitive processes, i.e., motivated reasoning and superficial processing.

We conduct an experiment in which we assign 117 staff auditors from two firms to one of two conditions – current year workpapers that are prepopulated (contain the prior year’s risk assessments) or non-prepopulated (blank). In both conditions, auditors receive client information and a separate file with the prior year risk assessments, then provide current year risk ratings and evidence for 19 risk factors, eight of which have increased from the prior year, six of which have decreased, and five of which have not changed. We measure risk rating accuracy based on whether participants move their ratings up from the prior year for increasing risk factors, down for decreasing factors, or not at all for unchanged factors. We measure stick-with-last-year responses using the number of times auditors select current year ratings that are the same as the prior year’s for risk factors other than those evaluated in the dependent variable. We measure the work fast response using time spent on the task.7 Finally, we measure motivated reasoning and superficial processing using participants’ documented evidence.

6 We also predict that a stick-with-last-year response will lead to higher rating accuracy for unchanged risks.
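The directional accuracy measure described above can be sketched in code. This is our own illustration of the scoring rule, not the authors’ instrument; the encoding of ratings and the function names are assumptions.

```python
# Hypothetical sketch of the directional accuracy scoring described above:
# a current year rating counts as accurate if it moves up for an increased
# risk, down for a decreased risk, and stays the same for an unchanged risk.
# Ratings are encoded ordinally, e.g., low = 1, medium = 2, high = 3.

def rating_accurate(prior: int, current: int, true_change: str) -> bool:
    """Return True if the current year rating moves in the correct direction."""
    if true_change == "increased":
        return current > prior
    if true_change == "decreased":
        return current < prior
    return current == prior  # unchanged risk: correct to keep last year's rating

def accuracy_score(assessments) -> float:
    """Proportion of risk factors rated in the correct direction."""
    hits = [rating_accurate(p, c, ch) for p, c, ch in assessments]
    return sum(hits) / len(hits)

# Example: three risk factors as (prior rating, current rating, true change).
example = [(1, 2, "increased"),   # correctly raised
           (3, 3, "decreased"),   # stuck with last year's rating: inaccurate
           (2, 2, "unchanged")]   # correctly left alone
print(accuracy_score(example))    # 2 of 3 rated in the correct direction
```

Under this rule, a stick-with-last-year response scores as accurate only for unchanged risks, which is what drives the opposite-signed predictions for changed versus unchanged risks.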

As predicted, auditors with prepopulated workpapers are, on average, less accurate in their ratings for risk factors that have changed, but more accurate in their ratings for risk factors that have not changed. For risks that have changed, lower accuracy occurs due to both stick-with-last-year and work fast responses. For unchanged risks, prepopulation leads to higher accuracy because the positive effect of sticking dominates the negative effect of working fast. However, three auditor characteristics reduce the negative effect of prepopulation on accuracy for risks that have changed. Auditors higher in professional identity – the overlap between the attributes and values of the auditor with those of the profession (Bamber and Iyer [2002]) – are less likely to stick, suggesting that these auditors have a greater preference for task accuracy. Auditors who have more replenished resources for self-control, and are thus more able or willing to pursue an accuracy preference (Baumeister et al. [1998]), are less likely to stick, as well as less likely to work fast, in response to prepopulation. Auditors with infrequent exposure to prepopulated workpapers on audit engagements also are less likely to respond to prepopulation by sticking, suggesting that frequent exposure to defaults can create a preference to stick that is stronger than any accuracy preference. Robustness tests show that results cannot be attributed to inattention or knowledge effects. Finally, exploratory analyses within the prepopulation condition show that sticking and working fast are associated with, respectively, motivated reasoning and superficial cognitive processing.

7 Auditors may spend different amounts of time in the two conditions due to differential typing requirements for the evidence portion of the assessment task. Non-prepopulated workpapers require that auditors type evidence “from scratch,” while prepopulated workpapers allow them to just type changes. We adjust our time measure to remove differences due to typing time.

Our findings have implications for both audit research and practice. We identify a previously unexamined factor that may explain inaccurate risk ratings (Allen et al. [2006]). Specifically, prepopulation of workpapers with prior year assessments leads auditors not only to be more likely to stick with prior year ratings but also to work fast in the risk assessment task, and thus be inaccurate when risks change. For increasing risks, inaccurate ratings can undermine audit effectiveness and financial reporting quality. For decreasing risks, although prepopulation can increase efficiency during audit planning, inaccurate ratings that are due to sticking paradoxically can lead to foregone efficiency gains in substantive testing. Moreover, if ratings are inaccurate as to level, even if client risks do not change, prepopulation can lead to continued inaccuracy. As “choice architects,” then, firms using prior year assessments as defaults may be inadvertently nudging auditors toward undesired choices (Thaler and Sunstein [2008]), suggesting the critical need for an intervention other than those such as training currently in place (PCAOB [2015]). Firms may wish to target the behavioral responses to these defaults by eliminating defaults or by developing “smart” defaults (Smith et al. [2013]) that consider current year client conditions. Alternatively, if research were to identify benefits of these defaults beyond increased accuracy for unchanged risks and more efficient completion of risk assessment, firms might consider retaining them but target the troublesome cognitive processes, e.g., ask auditors to consider why prior year ratings may be incorrect.


We also contribute to the defaults literature in psychology and economics (e.g., Kahneman [2003]; Just [2014]). First, we show that default effects can occur for choices that have a “right answer,” such that a preference for accuracy should come into play. However, we show that auditors who likely have a stronger preference for accuracy or are better able to exercise such a preference are less susceptible to default effects. These moderators could be important in other contexts where defaults are used and choice accuracy is important, e.g., medicine. Second, we show that defaults can carry context-specific meaning in addition to the general meaning that they can or should be retained. Third, our finding that motivated reasoning and superficial processing contribute to default effects adds to recent evidence that specific cognitive processes related to inaccurate choice may underlie responses to defaults (e.g., Steffel et al. [2016]).

Finally, despite extensive research in psychology and economics, this is the first study in accounting of which we are aware that examines default effects, and our results have ramifications outside of auditing. For example, our findings suggest that prepopulation in budgeting could explain the commonly observed behavior of adding the same percentage to each of last year’s numbers, a behavior that ignores account-specific factors (Economist [2009]). Future research also could examine whether the stickiness observed in other accounting settings, such as earnings targets for compensation (Indjejikian et al. [2014]) and debt covenant terms (Kahan and Klausner [1997]), is caused, at least partially, by prepopulation.

The rest of the paper is organized as follows. Section 2 provides background information and develops our hypotheses. Sections 3 and 4, respectively, describe the design and results of the experiment. Section 5 concludes.


2. Background and Hypothesis Development

2.1 BACKGROUND

Risk assessment is a critical audit task in that it directly affects substantive testing and, thus, audit effectiveness and efficiency (Allen et al. [2006]; Knechel et al. [2013]; PCAOB [2011a, 2011b, 2015]). The task requires auditors, for each risk factor (e.g., complexity of revenue recognition), to indicate a level of risk from a limited set of choices (e.g., “low,” “medium,” or “high”) and document evidence explaining their rating. Auditing standards (PCAOB [2010]) encourage use of prior year information for risk assessment, and auditors typically approach the task by referring to prior year assessments and considering whether current year conditions indicate changes (Brazel and Agoglia [2004]). However, regulators and firms have expressed concern that auditors sometimes do not change risk ratings when client risks have changed (Church and Shefchik [2012]; KPMG [2011]; PCAOB [2008, 2010, 2011a, 2011b, 2012, 2015]). If auditors do not revise risk ratings upward from the prior year when risks increase, current year ratings will be too low, and auditors can plan insufficient substantive tests, imperiling audit effectiveness and financial reporting quality. If auditors do not revise risk ratings downward from the prior year when risks decrease, current year ratings will be too high, and auditors can plan unnecessary substantive tests and decrease audit efficiency. Thus, although auditors’ substantive testing responses to changes in risk ratings have improved over time (Allen et al. [2006]; Fukukawa et al. [2011]), effectiveness and efficiency likely continue to suffer if auditors do not change risk ratings in response to changes in client conditions.8

8 Further, if risk ratings initially are inaccurate as to level, they can continue to be inaccurate even if conditions do not change, and substantive test responses can continue to be inappropriate from year to year.


We examine auditors’ accuracy at identifying whether risks have increased, decreased, or are unchanged from the prior year. We predict that a factor reducing accuracy when risks change is the manner in which auditors refer to prior year risk assessments, specifically whether they do so in current year workpapers that are prepopulated with those assessments, thereby facing a default option. While numerous factors, including elements of decision support systems, can affect risk assessment accuracy (Allen et al. [2006]), no prior research has examined the effects of the default option created by prepopulating workpapers with prior year results.

2.2 LITERATURE REVIEW AND HYPOTHESIS DEVELOPMENT

In this section, we first review literature on default options, which leads to our first two hypotheses: effects of the default option created by prepopulation of workpapers with prior year risk assessments on auditors’ accuracy for changed and unchanged risks. Then, we review literature on why default effects occur and develop hypotheses about how we expect auditors will respond to prepopulated workpapers, and the effect of these responses on risk rating accuracy.

2.2.1 Hypotheses for Effects of the Default Option Created by Prepopulation on Auditors’ Risk Rating Accuracy

A default option is the choice alternative that goes into effect if a decision-maker does not actively choose to the contrary (Thaler and Sunstein [2003]). Default options are prevalent in many settings, and three general streams of literature address their effects. First, studies examine settings where the default is to make a certain choice and people have to actively choose to “opt out,” or the default is to not make that choice and people have to actively choose to “opt in.” For example, some countries have laws that specify a person becomes an organ donor at death, while other countries make the default that the person does not become a donor at death unless she indicates to the contrary (Johnson and Goldstein [2003]). This literature shows that people are more likely to adopt an option when it is presented as the default (and they have to opt out) than when not adopting that option is the default (and they have to opt in) (e.g., McKenzie et al. [2006]).

A second stream of literature shows that the specific option designated as the default (e.g., Product A versus B) also has an effect on choice. For example, Dinner et al. [2011] find that being designated as the default increases the likelihood of a specific light bulb being adopted by a consumer comparing energy-efficient versus traditional, cheaper options. Similarly, Brown and Krishna [2004] show that designating a particular attribute of a product (e.g., large amount of hard drive space) as the default increases the likelihood the attribute is included in the hypothetical purchase of a customized product.

A third stream of literature shows people are more likely to adopt an option when it is the default versus when there is no default (and they must actively choose among alternatives). For example, studies have found that the presence of a default retirement plan option increases adoption of that option (Madrian and Shea [2001]; Thaler and Benartzi [2004]). Brown and Krishna [2004] show that consumers, when considering levels of product attributes (e.g., large versus small monitor), more frequently prefer a particular level when it is presented as a default, compared to when there is no default.

That default options affect choice goes against the classical economics view that choice is guided by well-developed preferences that are invariant to transient, extraneous factors such as whether a website requires opting in or out of receiving emails (Friedman [1953]). That is, if a person prefers not receiving emails, in the former (latter) case, she would not opt in (would opt out) (Johnson et al. [2002]). Instead, default option effects are consistent with decision-makers constructing preferences in response to such factors (e.g., Payne et al. [1992]; Slovic [1995]) – in this case, the presence, or specific value, of a default option. Similarly, we expect that the presence of a default option in current year workpapers – one that is due to prepopulation – will affect auditors’ risk ratings. Prepopulation creates a default option for this year’s ratings (and evidence) – last year’s ratings (and evidence). Non-prepopulated workpapers do not contain a default option; here, while auditors likewise refer to prior year assessments to determine if there should be changes therefrom, they must actively select current year risk ratings and document evidence. We predict that auditors facing prepopulated workpapers, then, will be more likely than those facing non-prepopulated workpapers to adopt the default option (last year’s rating) for a given risk factor.

This prediction leads to our hypotheses for the effects of prepopulation on risk rating accuracy, measured based on directional correctness. To accurately respond to risks that have increased (decreased) from the prior year, auditors must give higher (lower) risk ratings this year. If prepopulation increases the likelihood auditors will select last year’s rating, then auditors facing prepopulation will be less likely to provide accurate current year ratings for changing risks:

H1: Prepopulation will negatively affect risk rating accuracy for changing risks.

Conversely, to accurately respond to risks that have not changed from the prior year, auditors must leave unchanged the risk ratings from the prior year. Prepopulation, by increasing the likelihood of choosing last year’s ratings, then, should lead auditors facing prepopulation to be more accurate for unchanged risks:

H2: Prepopulation will positively affect risk rating accuracy for unchanged risks.


There is considerable tension for the above hypotheses. First, most default option studies examine choices for which there is no “right answer” (e.g., whether to donate organs); these choices necessarily are based on personal preferences. By contrast, there is a right answer for risk ratings – whether they change in the correct direction when risks change or do not change when risks have not changed. Second, in studies showing that an option is selected more when it is the default (versus when there is no default), the default condition presents one “singled out” option, but the no-default condition contains no “singled out” option (Carroll et al. [2009]). In our setting, the prior year’s rating is the singled out option irrespective of whether workpapers are prepopulated; thus, there may be no incremental effect of prior year ratings being embedded as defaults in prepopulated workpapers. Such a finding would be consistent with prior research showing few changes to risk ratings from year to year (e.g., Mock and Wright [1993, 1999]), and more generally, the “same as last year” effect in auditing (e.g., Brazel et al. [2004]).

2.2.2 Hypotheses for Responses that Mediate the Effects of Prepopulation on Risk Rating Accuracy

While there is strong evidence that people more frequently adopt an option when it is presented as the default, the theory as to why people do so is muddier given a relative dearth of empirical evidence. We focus on the explanation most relevant for auditing: people act as if a default option carries meaning (Brown and Krishna [2004]).

One way in which a default option can carry meaning is that decision-makers (consciously) believe a policymaker has selected the default because that policymaker finds value in the default, i.e., either prefers the default option or at least considers it to be acceptable. These beliefs then lead decision-makers to construct preferences and choose in line with those preferences. Studies examining this idea ask decision-makers to self-report their beliefs about the reasons underlying the policymaker’s selection of the default. For example, in settings of organ donation (McKenzie et al. [2006]), retirement plan enrollment (McKenzie et al. [2006]; Tannenbaum and Ditto [2011]), and course deadline options (Tannenbaum and Ditto [2011]), studies have found that people report that policymakers have chosen a default because they think decision-makers should choose that option, and also that policymakers themselves would choose the default.

A second way in which a default option can carry meaning is that people (nonconsciously) believe it is appropriate, or has value, simply because it occupies the space in which the final choice will be documented (here, a cell in a spreadsheet). Studies examining this idea (Dhingra et al. [2012]; Cappelletti et al. [2014]) show that decision-makers facing defaults appear to act similarly to people viewing any information whose truth value they must ascertain – they first believe the information to be true, even if told it is false, and then must work (again, nonconsciously) to “unbelieve it” if it is false (Gilbert et al. [1990]). In this scenario, it is unclear whether people have conscious beliefs about reasons underlying the policymaker’s selection of the default.

Based on these literatures, we expect that auditors viewing prepopulated workpapers that make prior year risk assessments the default will be more likely than those viewing non-prepopulated workpapers to respond in two ways. We expect both responses to mediate the effects of prepopulation on auditors’ risk rating accuracy for changing and unchanged risks. Figure 1 displays our theoretical model.

[Insert Figure 1 about here]
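The mediation structure of Figure 1 (condition → behavioral response → accuracy) is typically tested by bootstrapping the indirect effect, in the spirit of Preacher and Hayes [2008]. The sketch below is our own illustration on simulated data, not the paper's analysis: variable names, effect sizes, and the simple percentile interval are assumptions (the bias-corrected variant further adjusts the percentile cutoffs).

```python
# Illustrative percentile bootstrap of an indirect (mediating) effect.
# X: condition (prepopulated vs. blank), M: mediator (e.g., sticking),
# Y: outcome (e.g., rating accuracy). Data are simulated for the sketch.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.integers(0, 2, n).astype(float)   # 0/1 experimental condition
m = 0.5 * x + rng.normal(size=n)          # mediator responds to condition
y = -0.4 * m + rng.normal(size=n)         # outcome responds to mediator

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]            # path a: X -> M
    design = np.column_stack([m, x, np.ones_like(x)])
    b = np.linalg.lstsq(design, y, rcond=None)[0][0]  # path b: M -> Y given X
    return a * b                          # indirect effect of X on Y via M

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)           # resample cases with replacement
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")
```

A confidence interval excluding zero supports mediation; here the simulated indirect effect is negative, mirroring the prediction that prepopulation lowers accuracy for changed risks through the sticking and working-fast responses.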

First, we predict that auditors facing prepopulated workpapers are more likely to stick with last year’s ratings. This may occur because they (consciously) believe a prepopulated workpaper setup communicates that the policymaker (i.e., the firm) finds sticking acceptable (e.g., when uncertain about this year’s rating). On the other hand, a sticking response could reflect an initial (nonconscious) belief that the default rating is appropriate, simply because it appears in the space in the workpaper in which this year’s rating will be shown. Overall, a sticking response should lead auditors with prepopulated workpapers to exhibit reduced (greater) accuracy for changing (unchanged) risks.

The second expected response to prepopulation is specific to the auditing context and unrelated to the value of the default option per se. We expect that auditors facing prepopulated workpapers will work faster on the risk assessment task because they (consciously) believe the firm chose this workpaper setup to focus on efficiency (e.g., since it can decrease typing time); working fast is a key way of achieving efficiency.9 A work fast response should have a negative effect on accuracy for changing risks, as auditors may miss information suggesting that risks have changed from the prior year.

H3: Prepopulation will negatively affect risk rating accuracy for changing risks because of the negative effects of the stick-with-last-year and work fast responses.

The work fast response also may lead to reduced accuracy for unchanged risks, as

auditors may not take the time to compare current year information to the prior year’s.

However, we expect that the positive effects of the stick-with-last-year response will

dominate because the prior default options literature shows strong stick-with-the-default

effects. This leads to the following hypothesis:

H4: Prepopulation will positively affect risk rating accuracy for unchanged risks because the positive effect of the stick-with-last-year response will dominate the negative effect of the work fast response.

9 In a subsequent study, we found that Masters of Accounting students spontaneously (and consciously) generate a belief that efficiency is a key reason underlying prepopulation. When asked why the firm would choose this structure, 85 percent of those with prepopulated workpapers listed efficiency as a reason.

3. Method

3.1 PARTICIPANTS

Participants were 117 staff auditors from two firms with a mean of 12 months of

experience who completed the study during a firm training session.10,11 As described

further below, participants complete a risk assessment task. Staff auditors are appropriate

for the task used in this study as it requires only that participants determine if there are

changes in the values of information cues (i.e., the ratings for the risk factors). Prior

research indicates that this cue measurement component of tasks requires less experience

than the cue selection or cue weighting components (Bonner [1990, 1991]).12

3.2 DESIGN

We utilize a 1 x 2 between-participants design.13 The manipulated factor is

whether current year workpapers contain a default option, i.e., are prepopulated with the

prior year’s results. In the prepopulated condition, participants view a current year

workpaper with last year’s ratings and evidence filled into the spaces for this year’s

ratings and evidence. If auditors wish to make changes from the prior year rating, they

must change that rating to reflect the desired current year rating. If they wish to change

evidence, they may either modify last year’s evidence or completely delete the prior

year’s evidence and fill in new evidence. However, if they stick with last year’s

assessment, they can do nothing to the rating and evidence. In the non-prepopulated

condition, participants view a current year workpaper with blank spaces both for risk

10 There were no effects due to firm.
11 Experience ranged from 1.5 to 60 months. All conclusions hold after dropping outliers as to experience.
12 Moreover, the participating firms also considered the task to be appropriate for staff auditors.
13 Auditors also were assigned to one of three guidance conditions (two of the conditions provided differing forms of PCAOB guidance cautioning against overreliance on prior year results). As guidance had little effect on auditors’ choices, we collapse our analyses across guidance conditions.

ratings and evidence. Irrespective of whether they stick with prior year ratings and

evidence or make changes, these auditors must actively select current year ratings and

document evidence. All participants receive the prior year workpaper in a separate file

that they view on their screens next to the current year’s workpaper.

The prepopulation manipulation also entails telling participants either that their

firm had copied over or not copied over the prior year risk ratings and evidence into the

current year’s workpaper. All participants also are told: “This is consistent with your

firm’s policy on how auditors should access prior year workpapers. Senior leadership in

the audit quality group at your firm chose this policy because they believe it strikes the

right balance between audit effectiveness and efficiency in performing the audit.” We

chose this language to ensure that participants’ beliefs about the default were focused on

effectiveness and efficiency rather than, for example, documentation. We included

language regarding the choice being made at the firm level to reduce noise created by

participants making their own inferences about the level at which the choice was made.14

3.3 TASK, DEPENDENT VARIABLES, AND MEASURES

The risk assessment task requires that auditors first read current year client

information and assume they are in a staff role on the engagement. They then receive an

excerpt from the relevant standard (PCAOB [2010]) to be consistent with practice and

ensure similar knowledge of the standard. Next, they view sample risk ratings and

evidence, and read the information above regarding the workpaper setup. Then, they

complete the task, which entails indicating a rating from 1 (low risk) to 5 (high risk) for

19 risk factors, and providing evidence to support each rating.

14 Discussions with auditors indicated that the choice can be made at the firm or engagement level.

We measure risk rating accuracy based on whether participants change their

ratings in the right direction for increases (up from last year’s rating) and decreases

(down from last year’s rating), or do not change for unchanged factors.15 While our

hypotheses pertain to auditors’ accuracy for changed and unchanged risks, we employ

three dependent variables – number of correct increases, decreases, and “no changes.”

We do so because, as explained next, the measure for the stick-with-last-year response

for a given risk factor type is calculated using sticking for the other two risk factor types;

otherwise, there would be a mechanical association between this measure and the

accuracy dependent variable. In total, there are eight risk factors involving increases from

the prior year, six risk factors involving decreases, and five involving no changes.16
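The directional scoring rule described above can be sketched as follows (a minimal illustration with hypothetical ratings; it paraphrases, rather than reproduces, the authors’ coding procedure):

```python
# A minimal sketch of the directional scoring rule (hypothetical
# ratings; the authors' coding procedure is paraphrased, not reproduced).

def score_accuracy(prior, current, true_direction):
    """1 if the current-year rating moves in the correct direction
    relative to the prior-year rating, else 0."""
    if true_direction == "increase":
        return int(current > prior)
    if true_direction == "decrease":
        return int(current < prior)
    return int(current == prior)   # "no change" factors

# Three of the 19 factors for one hypothetical participant:
factors = [
    (2, 4, "increase"),    # correctly rated up        -> 1
    (4, 4, "decrease"),    # stuck with last year      -> 0
    (3, 3, "no change"),   # correctly left unchanged  -> 1
]
scores = [score_accuracy(p, c, d) for p, c, d in factors]
print(scores)  # [1, 0, 1]
```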

Our measures of auditors’ responses to default options are behavioral in nature.

Such measures are unobtrusive and, therefore, not leading (Libby et al. [2015]), and,

more important, they allow for the possibility that the default-based responses are at least

partially nonconscious.17 For the stick-with-last-year response, we calculate the number

of times auditors indicate a current year risk rating that is the same as the prior year risk

rating. We create this measure (Stick with Last Year) for each type of risk factor

(increase, decrease, no change) by totaling the number of no changes for the other types

of risk factors. For example, Stick with Last Year for increasing risk factors is the number

of times auditors indicate the prior year risk rating for decreasing and no change risk

15 We validated whether each risk increased, decreased, or did not change with four persons with auditing experience. However, as indicated by relatively low accuracy among non-prepopulated participants, there were two decreasing risk factors that appeared to be somewhat ambiguous. We confirm that all results are robust to an alternative dependent variable for decreasing risk factors that excludes these two risk factors.
16 For example, the Company faced the threat of a new, large competitor in the industry in the current year, which constituted an increase in the related risk factor. As an example of a decreasing risk, the Company in the current year resolved open, and potentially problematic, negotiations with union representatives.
17 Using behavioral measures also helps overcome the problem of social desirability bias that might occur were we to ask auditors to report their beliefs (assuming they are conscious). For example, auditors may be reluctant to report that the workpaper setup leads them to believe it is acceptable to stick with last year.

factors. We measure the work fast response using the amount of time participants spend

on the risk assessment task. Because there could be differences in time spent due to the

fact that non-prepopulated workpapers require typing “from scratch,” whereas

prepopulated workpapers enable altering prior year evidence, we create a time measure

that is comparable across conditions (Work Fast). We subtract the time participants in the

prepopulated condition spend typing, then replace it with an estimate of the average time

auditors in the non-prepopulated condition spend typing.18
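Under assumed variable names, the two behavioral measures might be computed as follows (the 69 words-per-minute typing rate comes from footnote 18; the example data are hypothetical):

```python
# Sketches of the two behavioral measures under assumed variable
# names; the 69 words-per-minute typing rate is from footnote 18,
# and the example data are hypothetical.

WPM = 69  # average staff-auditor typing speed, words per minute

def stick_with_last_year(prior, current, exclude_type, factor_types):
    """Count sticking (current rating == prior rating) over the two
    risk factor types OTHER than the one being predicted."""
    return sum(
        int(c == p)
        for p, c, t in zip(prior, current, factor_types)
        if t != exclude_type
    )

def work_fast(total_minutes, new_words_typed, avg_nonprepop_words):
    """Time measure comparable across conditions: remove the
    participant's own typing time, add back the average typing time
    of the non-prepopulated condition."""
    return total_minutes - new_words_typed / WPM + avg_nonprepop_words / WPM

prior   = [2, 4, 3, 5]
current = [2, 4, 4, 5]
types   = ["increase", "decrease", "no change", "decrease"]

# Sticking measure for increasing factors uses only the other two types:
print(stick_with_last_year(prior, current, "increase", types))  # 2
print(work_fast(45.0, new_words_typed=138, avg_nonprepop_words=414))  # 49.0
```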

Participants also answer questions regarding other possible beliefs about the

firm’s choice to prepopulate.19 Finally, they respond to questions designed to capture

auditor characteristics that are relevant to a preference for accuracy and that therefore

may act as moderators, i.e., reduce the negative effects of prepopulation. Specifically, we

measure professional identity, sufficiency of self-control resources, and frequency of

exposure to prepopulated workpapers on audit engagements.

4. Results

4.1 TESTS OF HYPOTHESES RELATED TO PREPOPULATION

Our primary dependent variable is participants’ risk rating accuracy, i.e., for each

type of risk factor, the number of times responses are directionally correct (labeled

18 To make this adjustment, we required an average typing time for staff auditors. We asked ten staff auditors who did not participate in this study to type a passage from our case study, and calculated an average typing time of 69 words per minute. We then calculated the amount of time participants in the prepopulation condition spent typing by dividing the number of new words of evidence they documented (i.e., incremental to the prior year’s evidence that was prepopulated in the workpaper) by 69, and subtracted this estimate from their total time on the risk assessment task. Next, we divided the average number of words of evidence typed by participants in the non-prepopulated workpaper condition by 69, and added this estimate to the time for all participants in the prepopulated workpaper condition.
19 Prepopulation does not affect auditors’ beliefs that the workpaper structure was done to fulfill PCAOB documentation requirements, to reflect the fact that client risks change little from year to year, or for training purposes (smallest one-tailed p = 0.230).

“Accuracy for ___ Risk Factors”). Descriptive statistics for the dependent variable and

the mediators are reported in Table 1, Panels A and B.

Table 1, Panel C reports results of an Analysis of Variance using a dichotomous

independent variable indicating whether or not participants’ workpapers are

prepopulated.20 In support of H1, there is a negative effect of Prepopulation on Accuracy

for Increasing (F(1,115) = 65.11; one-tailed p < 0.001) and Decreasing Risk Factors (F(1,115)

= 78.43; one-tailed p < 0.001). Supporting H2, there is a positive effect of Prepopulation

on Accuracy for No Change Risk Factors (F(1,115) = 51.27; one-tailed p < 0.001). Next, we

test our hypotheses about auditors’ responses to prepopulated workpapers, and how these

responses mediate the relation between prepopulation and risk rating accuracy. We test

these hypotheses using structural equation modeling (Byrne [2016]), as well as the bias-

corrected bootstrapping method (Preacher and Hayes [2008]).

4.1.1 Model for Increasing Risk Factors

Figure 2 presents the empirical model, and illustrates how Stick with Last Year

and Work Fast jointly mediate the effect of Prepopulation on Accuracy for Increasing

Risk Factors.21 The structural equation model fits the data well. The chi-square test

reveals good fit (χ2(1) = 0.16, p = 0.690), as do other standard measures. The Comparative

Fit Index (CFI) of 1.00 is above the threshold of 0.95 (Hu and Bentler [1999]), and the

Root Mean Square Error of Approximation (RMSEA) of 0.01 is below the 0.05 threshold

for good fit (MacCallum et al. [1996]). As demonstrated by the path coefficients in the

model, there are positive effects of Prepopulation on Stick with Last Year (Link 1; p <

20 Tests of assumptions show that the data do not meet the ANOVA assumptions of a normal distribution and homogeneity of variance. Therefore, we also conduct nonparametric tests (i.e., Independent-Samples Median tests). Further, since the dependent variable utilizes count data, we use Poisson models to confirm results.
21 For completeness, we include a direct path from prepopulation to accuracy in these models.

0.001) and Work Fast (Link 2; p = 0.032), as expected. Also, as expected, there are

negative effects of Stick with Last Year (Link 3; p < 0.001) and Work Fast (Link 4; p =

0.001) on Accuracy for Increasing Risk Factors. Finally, there is a direct, negative effect

of Prepopulation on Accuracy for Increasing Risk Factors (p = 0.029).

[Insert Figure 2 about here]

Consistent with Griffith et al. [2015], we use the bias-corrected bootstrapping

method (Preacher and Hayes [2008]) to formally test for mediating effects because

simulation tests indicate that it is the most accurate method for sample sizes less than 500

(Hayes and Scharkow [2013]).22 The findings are consistent with those from the structural

equation model. Specifically, the 90% bias-corrected confidence interval for Stick with

Last Year is (-2.65, -1.57), showing the negative effect of prepopulation on accuracy

through the stick-with-last-year response. The confidence interval for Work Fast is (-0.43,

-0.04), supporting the negative effect of prepopulation on accuracy through the work fast

response. The direct negative effect of prepopulation on accuracy for increasing risks

suggests there are other mediators (e.g., Rucker et al. [2011]). For example,

prepopulation may lead auditors to be more concerned about the increased substantive

testing (i.e., reduced efficiency) ramifications of increasing risk ratings.
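The bias-corrected bootstrap can be sketched as follows, on simulated data (an illustrative implementation of the general method, not the study’s estimation code; all variable names and data are ours):

```python
# Bias-corrected bootstrap CI for an indirect effect a*b, in the
# spirit of Preacher and Hayes [2008]. Simulated data; illustrative only.
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)

def indirect_effect(x, m, y):
    """a*b: effect of X on M times effect of M on Y controlling for X."""
    ones = np.ones_like(x)
    a = np.linalg.lstsq(np.column_stack([ones, x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([ones, x, m]), y, rcond=None)[0][2]
    return a * b

def bc_bootstrap_ci(x, m, y, reps=2000, level=0.90):
    est = indirect_effect(x, m, y)
    n = len(x)
    boots = np.empty(reps)
    for i in range(reps):
        idx = rng.integers(0, n, n)               # resample rows with replacement
        boots[i] = indirect_effect(x[idx], m[idx], y[idx])
    nd = NormalDist()
    z0 = nd.inv_cdf(float(np.mean(boots < est)))  # bias-correction factor
    alpha = (1 - level) / 2
    p_lo = nd.cdf(2 * z0 + nd.inv_cdf(alpha))     # adjusted percentiles
    p_hi = nd.cdf(2 * z0 + nd.inv_cdf(1 - alpha))
    return np.quantile(boots, [p_lo, p_hi])

# Simulated mediation: x -> m -> y with positive paths a and b
x = rng.integers(0, 2, 120).astype(float)   # 0/1 "prepopulation" indicator
m = 0.8 * x + rng.normal(0, 1, 120)         # mediator
y = 0.6 * m + rng.normal(0, 1, 120)         # outcome
lo, hi = bc_bootstrap_ci(x, m, y)
print(f"90% bias-corrected CI for a*b: ({lo:.2f}, {hi:.2f})")
```

Mediation is supported when the confidence interval for a*b excludes zero, as in the intervals reported above.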

4.1.2 Model for Decreasing Risk Factors

Figure 3 illustrates the model for decreasing risk factors. Again, the model fits the

data well (χ2(1) = 0.67, p = 0.413; CFI = 1.00; RMSEA = 0.00). As expected, there are

positive effects of Prepopulation on Stick with Last Year (Link 1; p < 0.001) and Work

22 The Preacher and Hayes [2008] method tests for mediation by examining whether the coefficients for the paths from the independent variable through each mediator to the dependent variable are significantly different from zero. This method offers multiple advantages, including greater robustness to violations of assumptions (Hayes [2013]).

Fast (Link 2; p = 0.032). Also as expected, there are negative effects of Stick with Last

Year (Link 3; p < 0.001) and Work Fast (Link 4; p = 0.001) on Accuracy for Decreasing

Risk Factors. Finally, there is a negative, significant direct effect of Prepopulation on

Accuracy for Decreasing Risk Factors (p < 0.001).

As above, we use the Preacher and Hayes [2008] method to test the significance

of the mediators’ effects. The 90% bias-corrected confidence intervals for Stick with Last

Year and Work Fast are (-1.33, -0.60) and (-0.26, -0.02). Overall, prepopulation reduces

auditors’ accuracy for decreasing risks through the stick-with-last-year and work fast

responses. Given that our sticking measure is derived from increasing and no change

factors, the direct negative effect of prepopulation on accuracy for decreasing risks may

reflect an incremental concern about changing for these risks, consistent with auditor

conservatism (e.g., Hoffman and Patton [1997]).

[Insert Figure 3 about here]

4.1.3 Model for No Change Risk Factors

Figure 4 illustrates the model for no change risk factors. The model fit is

relatively poor. While the CFI indicates good fit (CFI = 0.97), the chi-square test is

significant (χ2(1) = 5.94; p = 0.015) and the RMSEA of 0.20 surpasses the threshold of

0.10 for mediocre fit (MacCallum et al. [1996]).23

Despite the poor overall fit, the path coefficients in the model are significant and

in the expected directions. There are positive effects of Prepopulation on Stick with Last

Year (Link 1; p < 0.001) and Work Fast (Link 2; p = 0.032). Also, there is a positive

23 However, when the sample size is small, the RMSEA tends to over-reject (Byrne [2016]; Hu and Bentler [1999]). Further, several other fit statistics indicate adequate fit; the Goodness of Fit Index is 0.976, and the Standardized Root Mean Square Residual is 0.058, indicating that the model explains the data to within an average error of 0.058 (Byrne [2016]).

effect of Stick with Last Year on Accuracy for No Change Risk Factors (Link 3; p <

0.001), and a negative effect of Work Fast on Accuracy for No Change Risk Factors

(Link 4; p = 0.001). Finally, there is a direct, positive effect of Prepopulation on

Accuracy for No Change Risk Factors (p = 0.008). The Preacher and Hayes [2008]

method shows that the mediators are again significant; the 90% bias-corrected confidence

intervals for Stick with Last Year and Work Fast are (0.99, 1.76) and (-0.37, -0.03).

Overall, prepopulation increases auditors’ accuracy for unchanged risks because the

positive effect of the stick-with-last-year response dominates the negative effect of the

work fast response. The direct positive effect of prepopulation on accuracy may reflect

the efficiency concern that would occur if auditors were, instead, to increase ratings.

[Insert Figure 4 about here]

4.2 ANALYSES OF THE MODERATING EFFECTS OF AUDITOR CHARACTERISTICS

As discussed earlier, most studies on default option effects concern choices for

which there are no right answers, whereas there are right answers for risk ratings. While

the above results suggest that the average auditor responds to default options consistent

with prior studies, auditors who have stronger preferences for accuracy and/or who are

better able to exercise such preferences may not. Thus, we examine whether three auditor

characteristics that are related to a preference for accuracy can reduce (or eliminate) the

negative effects of prepopulation on accuracy for risks that have changed: professional

identity, sufficiency of resources for self-control, and frequency of use of prepopulated

workpapers on engagements. We use two approaches to test for moderation. The first is a

comparison of two structural equation models – one that is unconstrained, and one that

constrains the links between prepopulation and the mediators to be equal across levels of

the moderator. With this approach, there is evidence of moderation if the unconstrained

model has significantly better fit than the constrained model (Arbuckle [2016]). To have

the strongest tests of moderation, we create three groups for each moderator – low,

medium, and high – but use only the low and high groups to examine the moderator’s

effects. We also use the Preacher and Hayes [2007] moderated-mediation approach.
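The low/medium/high split can be sketched as a simple tertile cut (illustrative values and variable names, not the study’s data):

```python
# A hypothetical sketch of the low/medium/high moderator split;
# the variable names and values are ours, not the authors'.
import numpy as np

def tertile_groups(values):
    """Label each observation 'low', 'medium', or 'high' by tertile."""
    v = np.asarray(values, dtype=float)
    lo_cut, hi_cut = np.quantile(v, [1 / 3, 2 / 3])
    return np.where(v <= lo_cut, "low",
                    np.where(v <= hi_cut, "medium", "high"))

identity = [1, 2, 2, 3, 4, 4, 5, 6, 7]   # e.g., circle-overlap responses
groups = tertile_groups(identity)
# Only the low and high groups enter the moderation tests:
keep = groups != "medium"
print([str(g) for g in groups[keep]])  # ['low', 'low', 'low', 'high', 'high', 'high']
```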

4.2.1 Professional Identity

First, we examine the moderating effect of professional identity, i.e., the extent to

which an auditor’s attributes and values overlap with those of the public accounting

profession (Bamber and Iyer [2002]). Theory on self-concept maintenance predicts that

auditors identifying more with the profession will consider their performance on the risk

assessment task an integral part of their self-concept (Britt [1999, 2005]) and have a

higher preference for accuracy in their risk rating choices. Thus, we expect these auditors

to be less likely to respond to prepopulation by sticking with last year’s ratings.24

We measure Identity by asking participants to select one of seven images of two

overlapping circles (reflecting the self and the profession), ranging from no overlap to

nearly overlapping (Bauer [2015]). A comparison of the unconstrained and constrained

structural equation models reveals that Identity moderates the effect of Prepopulation on

Stick with Last Year and Accuracy for Increasing (χ2(1) = 4.67, p = 0.031) and Decreasing

(χ2(1) = 4.35, p = 0.037) Risk Factors. The effect of Prepopulation on Stick with Last Year

is greater at the low level of Identity than at the high level, although it remains significant at the

24 We do not predict (or find) a moderating effect of Identity on Work Fast. While auditors with strong Identity may be less likely to work more quickly in response to prepopulation, as it could reduce accuracy, they may also be more likely to do so. For example, they may believe they can achieve efficiency gains by working faster, but smarter.

high level. The Preacher and Hayes [2007] approach also supports Identity as having this

moderating effect for increasing (0.07, 0.51) and decreasing (0.04, 0.24) risks.

4.2.2 Sufficiency of Resources for Self-Control

Second, we examine the moderating effect of the sufficiency of auditors’

resources for self-control in decision-making (Baumeister et al. [1998]). Self-control

resources can vary across auditors due to factors such as extent of sleep and multi-tasking

(Lanaj et al. [2014]; Vohs et al. [2008]). Theory predicts that auditors who are more

replenished (i.e., less depleted) as to self-control resources will be less likely to respond

to prepopulation by sticking with last year’s rating; these individuals are more likely to be

able (or wish) to override the sticking response (Baumeister et al. [1998]; Evans et al.

[2011]). Replenished auditors also may be less likely to work fast in response to

prepopulation; because replenished individuals do not need to conserve resources, they

are less averse to effortful approaches (Baumeister et al. [2000]).

We measure Replenishment using auditors’ (reverse-scored) self-reported

agreement (on a 1-7 scale) with the statement that they felt mentally overloaded when

trying to combine the current client information with the prior year risk ratings (Muraven

et al. [1998]). The unconstrained and constrained structural equation models are

significantly different for increasing (χ2(1) = 6.01, p = 0.049) and decreasing (χ2(1) = 5.85, p = 0.054) risks. The effect of Prepopulation on Stick with Last Year is lower at the high

level of Replenishment than at the low level (albeit consistently significant). Moreover,

the effect of Prepopulation on Work Fast is not significant at the high level of

Replenishment, but significant at the low level. The Preacher and Hayes [2007] approach

also supports Replenishment as a moderator of Stick with Last Year for increasing (0.03,

0.54) and decreasing (0.06, 0.32) risks. Likewise, the confidence intervals are significant

for Work Fast as a moderator for accuracy for increasing (0.01, 0.29) and decreasing

(0.01, 0.17) risks. Overall, it appears that replenished auditors are more willing (and/or

able) to exert their available self-control resources to override the sticking response to

prepopulation, and do not work fast in response to prepopulation. By contrast, less

replenished (more depleted) auditors may use prepopulation as justification to work fast,

i.e., exert less effort to preserve remaining scarce cognitive resources.

4.2.3 Frequency of Use of Prepopulated Workpapers on Audit Engagements

Given variation in practice, the third moderator we examine is auditors’ frequency

of exposure to prepopulated workpapers on their audit engagements. There are two

theoretical possibilities for how frequency of exposure could influence auditors’

responses to prepopulation. Auditors who are frequently exposed to prepopulated

workpapers could respond more strongly because repeatedly responding to prepopulation

with specific behaviors could lead to those behaviors becoming de facto established

preferences. Alternatively, auditors who frequently use prepopulated workpapers may be

less inclined to draw meaning from the workpaper setup and instead attribute the use of

prepopulation to factors such as “this is just the way things are done.”

We classify auditors based on whether they use all prepopulated (22 percent), a

mix of prepopulated and non-prepopulated (50 percent), or all non-prepopulated (28

percent) workpapers on their engagements (Use of Prepopulated Workpapers). The

unconstrained and constrained structural equation models are marginally different for

increasing risks (χ2(1) = 5.33, p = 0.070), and significantly different for decreasing risks

(χ2(1) = 7.07, p = 0.029). Within the unconstrained models (for both increasing and

decreasing risks), the effect of Prepopulation on Stick with Last Year is smaller for

auditors who indicate using all non-prepopulated workpapers than for those who indicate

using all prepopulated workpapers (although the effect still is significant in the former

case). This result suggests that frequent exposure to defaults may lead to a de facto

preference to stick with the default that overrides any preference for accuracy. The

Preacher and Hayes [2007] approach also supports Use of Prepopulated Workpapers as a

moderator of Stick with Last Year for increasing (0.01, 0.93) and decreasing (0.06, 0.57)

risks.25 In other words, infrequent exposure to prepopulated workpapers can reduce the

negative effect of prepopulation on auditors’ risk rating accuracy.

To summarize, while default effects hold on average for a choice where accuracy

matters, auditor characteristics reflecting a preference for accuracy can reduce these

effects, suggesting an important boundary condition for theory on default options.

4.3 ROBUSTNESS TESTS

4.3.1 Inattention

A possible alternative explanation for our findings is that prepopulation causes

auditors to simply be inattentive to the task (Dhingra et al. [2012]). Thus, we redo all

analyses after dropping five participants we classify as inattentive. These five participants

both (1) made no changes to any of the 19 risk ratings and (2) were in the lowest tenth

percentile of time spent on the task. When excluding these participants, all results are

qualitatively the same. We conclude that inattention cannot explain our findings.
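The inattention screen can be sketched as follows (hypothetical data; the percentile cut and flag logic follow the description above):

```python
# Sketch of the inattention screen: flag participants who changed
# none of the 19 ratings AND fell in the lowest tenth percentile of
# task time. The data below are hypothetical.
import numpy as np

def inattentive(num_changes, task_minutes):
    num_changes = np.asarray(num_changes)
    task_minutes = np.asarray(task_minutes, dtype=float)
    cutoff = np.percentile(task_minutes, 10)   # lowest tenth percentile
    return (num_changes == 0) & (task_minutes <= cutoff)

changes = np.array([0, 5, 0, 12, 3, 0, 8, 1, 0, 6])   # ratings changed
minutes = np.array([18, 40, 55, 47, 39, 20, 44, 52, 61, 43])
flags = inattentive(changes, minutes)
print(int(flags.sum()))  # 1 participant flagged as inattentive
```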

4.3.2 Knowledge of Risk Assessment

25 By contrast, both the SEM and Preacher and Hayes approaches reveal that the effect of Prepopulation on Work Fast is not significant for auditors who use all prepopulated workpapers, but significant for those never using prepopulated workpapers. Examination of means suggests this effect reflects auditors with all prepopulated workpapers working at a consistent pace irrespective of the type of workpapers they receive in the experiment. However, those never using prepopulated workpapers work more quickly when presented with prepopulated workpapers, perhaps because such workpapers give them justification to do so.

A concern regarding our use of staff auditors may be that they lack knowledge for

the task. While random assignment should ensure that lack of knowledge does not drive

the effect of prepopulation, we nevertheless address this concern. We note first that

participants in the non-prepopulated condition have relatively high accuracy for changing

risk factors (75 percent for increasing factors, and 78 percent for decreasing factors once

two ambiguous factors are dropped). We also find that knowledge does not moderate the

effects of prepopulation, with one exception.26 For decreasing risk factors, having

performed a risk assessment marginally reduces, but does not eliminate, the negative

effect of prepopulation on accuracy through Stick with Last Year (χ2(1) = 5.21, p = 0.080).

We conclude that lack of knowledge cannot explain our findings, and that the negative

effects of prepopulation may persist for auditors with more risk assessment knowledge.

4.4 ANALYSES OF COGNITIVE PROCESSES

In this section, we provide exploratory analyses of cognitive processes that may

be associated with the behavioral responses to defaults. We do so because an intervention

could target either the behavioral responses or the cognitive processes underlying those

responses. We use participants’ documented evidence to create measures of the processes

of interest (separately for increasing and decreasing risks), and conduct our analyses

within the prepopulation condition as there is sufficient variation to do so.27

26 We capture knowledge with three measures: (1) the extent to which auditors agree that, for a given risk factor type, the firm wants the auditor to take the appropriate action, (2) whether the participant has performed a risk assessment on their engagements, and (3) months of audit experience. Consistent with our task requiring only cue measurement and staff auditors having the knowledge to do cue measurement, none of these variables is associated with risk assessment accuracy (smallest one-tailed p > 0.200).
27 We also do so because the evidence that participants typed has different implications under different workpaper setups (as we did not provide specific instructions pertaining to evidence). Because participants with prepopulated workpapers started with the evidence already present, evidence typed in that condition is more consistent in style than that in the non-prepopulated condition (in which styles ranged from brief and informal, using language like “no change,” to lengthier and more formal).

First, we expect the sticking response will be associated with motivated reasoning

(Kunda [1990]). To support sticking, auditors may recruit arguments favoring the default

and/or explain away information that suggests changing (Dhingra et al. [2012]). We

measure the latter type of motivated reasoning using a count of transition words

indicative of counterargument (e.g., “while” or “even so”) (Brown and Krishna [2004];

Jain and Maheswaran [2000]).28 For example, one participant explained away facts

suggesting that the risk for revenue recognition complexity should increase as follows:

“While the client will introduce new restaurants for which revenue recognition will be

complex, we believe the client’s accounting staff is competent enough to handle it.”

Second, we expect that the working fast response will be associated with

superficial processing (Anderson and Reder [1979]). Specifically, we expect auditors

may process just enough information to meet a perceived minimum time requirement.

For example, they may skim current year information to “get the big picture,” instead of

reading thoroughly. We measure superficial processing using the number of words

participants type that are incremental to the already present prior year evidence. Fewer new words indicate more superficial processing (Earley [2002]).
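The two documented-evidence measures described above can be sketched in code. This is an illustrative simplification rather than the authors' actual coding scheme: the transition-word list and the word-level tokenization are assumptions, not taken from the study.

```python
import re

# Hypothetical list of transition words indicative of counterargument;
# the study's actual coding list is not fully specified in the text.
COUNTERARGUMENT_WORDS = ["while", "although", "however", "even so", "despite", "but"]

def motivated_reasoning_score(evidence: str) -> int:
    """Count transition words indicative of counterargument."""
    text = evidence.lower()
    return sum(len(re.findall(r"\b" + re.escape(w) + r"\b", text))
               for w in COUNTERARGUMENT_WORDS)

def incremental_word_count(evidence: str, prior_year_evidence: str) -> int:
    """Count words typed that are incremental to the already present prior
    year evidence; fewer new words indicate more superficial processing."""
    prior = set(re.findall(r"\w+", prior_year_evidence.lower()))
    current = re.findall(r"\w+", evidence.lower())
    return sum(1 for w in current if w not in prior)

example = ("While the client will introduce new restaurants for which revenue "
           "recognition will be complex, we believe the client's accounting "
           "staff is competent enough to handle it.")
print(motivated_reasoning_score(example))  # → 1 ("While" at the start)
```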

We create process models that explore whether each cognitive process is

associated with the expected default-option based response and, in turn, whether each

process reduces rating accuracy, as would be expected given these processes generally

are associated with lower quality judgment and choice (Bonner [2008]). Analyses for

both increasing and decreasing risks show that Stick With Last Year is positively related

28 We are not able to examine the search for only confirming information because we did not track auditors’ information search patterns.


to Motivated Reasoning, and Motivated Reasoning is negatively related to accuracy.29

Analyses for both increasing and decreasing risks show that Work Fast is positively

associated with Superficial Processing, and Superficial Processing is negatively

associated with accuracy.30 Overall, these results provide preliminary evidence that the

responses to the default option created by prepopulation may affect rating accuracy for

changing risks by increasing the use of specific cognitive processes.31
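The bias-corrected bootstrap test of the indirect effects (Preacher and Hayes [2008]) can be sketched as follows on synthetic data. The data-generating values are invented for illustration, and a percentile interval stands in for the bias-corrected interval; nothing here reproduces the study's actual analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data mimicking the design: a binary treatment (prepopulation),
# a mediator (e.g., sticking with last year), and an accuracy outcome.
n = 120
x = rng.integers(0, 2, n).astype(float)    # prepopulation (0/1)
m = 4.0 * x + rng.normal(0.0, 2.0, n)      # mediator
y = -0.4 * m + rng.normal(0.0, 1.0, n)     # rating accuracy

def indirect_effect(x, m, y):
    """a-path (mediator on treatment) times b-path (outcome on mediator,
    controlling for treatment), both estimated by OLS."""
    a = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones_like(x), x, m]), y, rcond=None)[0][2]
    return a * b

# Percentile bootstrap of the indirect effect (a simplification of the
# bias-corrected interval used in the study).
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)  # resample cases with replacement
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [5, 95])  # 90% confidence interval
print(f"90% CI for the indirect effect: ({lo:.2f}, {hi:.2f})")
# An interval that excludes zero supports mediation.
```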

5. Discussion and Conclusions

Auditors’ accurate assessment of client risks is critical to audit effectiveness and,

thus, financial reporting quality, as well as to audit efficiency, and regulators and audit

firms have expressed concerns about auditors’ accuracy at this task (Church and Shefchik

[2012]; KPMG [2011]; PCAOB [2015]). While auditors generally refer to prior year risk

assessments when making current year assessments, our study shows that the seemingly

innocuous manner in which they do so affects their risk rating accuracy. Auditors who

refer to prior year assessments in prepopulated current year workpapers and, therefore,

face a default option, have less accurate ratings for risks that have changed than do

auditors who, because their workpapers are non-prepopulated, refer to those assessments in a separate file. Further, we show that these negative effects occur because

auditors respond to the default option created by prepopulation by being more likely to

stick with last year’s ratings and work fast. The sticking response also leads to (likely

29 The Preacher and Hayes [2008] 90% bias-corrected confidence intervals for Motivated Reasoning are significant for increasing (-0.21, -0.06) and decreasing (-0.08, -0.03) risks.

30 The 90% bias-corrected confidence intervals for Superficial Processing are significant for increasing (-0.06, -0.02) and decreasing (-0.03, -0.01) risks.

31 We are unable to conclude that the sticking response leads to motivated reasoning because we measure this response using behavior rather than a self-report. Instead, the processes could occur first and lead to sticking with the default. For the work fast response, it seems more likely that the processing occurs as a result of a belief that the default suggests working quickly is preferred.


inadvertent) greater accuracy for unchanged risks. Exploratory analyses show that the

sticking and work fast responses are associated with specific cognitive processes, i.e.,

motivated reasoning and superficial processing.

Our findings suggest the critical need for an intervention in the area of risk assessment, given the importance of accurate risk ratings to audit and financial reporting quality. If firms are unaware of the effects of prepopulating current year workpapers with

prior year risk assessments, current training and systems-based interventions (PCAOB

[2015]) likely are ineffective. Further, given that the PCAOB has specifically questioned

whether the use of another type of default (standardized lists of risk factors) is conducive

to accurately assessing risks (PCAOB [2015]), firms may wish to eliminate defaults of

prior year results entirely by using non-prepopulated workpapers. However, if firms wish

to use prior year risk assessments as defaults to obtain efficiency or other benefits, they

may consider developing “smart” defaults (Smith et al. [2013]) that take into account

current client conditions. Alternatively, firms wishing to maintain these defaults could

target the troublesome cognitive processes. For example, an intervention for motivated

reasoning could be to ask auditors to consider why prior year ratings may be wrong.

The need for an intervention that eliminates the effects is further underscored by

the results for our moderators that reflect accuracy preferences. Specifically, while

professional identity, replenished self-control resources, and use of all non-prepopulated

workpapers on engagements each reduce the negative effects of prepopulation on risk

assessment accuracy, the effects persist. Moreover, these characteristics may not be

prevalent on audit engagements. For example, the audit environment can be rife with

factors that deplete self-control resources, e.g., multi-tasking, lack of sleep, and complex


thinking (Buchheit et al. [2016]; Lanaj et al. [2014]; Mullis and Hatfield [2018]; Schmeichel et al. [2003]). As another example, only about a quarter of our participants had

experienced exclusively non-prepopulated workpapers on their audit engagements.

We believe our theory generalizes to other audit tasks, so that firms should

consider whether defaults – of prior year results or other content such as lists of standard

substantive tests – will lead to undesired auditor choices; future research could test this

premise. Further, our finding that frequent exposure to prepopulated workpapers may

cultivate a “stick with the default” preference, along with the importance of individual

preferences for shaping team and firm norms (Kesan and Shah [2006]), suggests that

default effects in one audit task could lead to sticking with the default in other audit tasks.

Our study is the first of which we are aware in accounting that examines defaults,

and our theory may generalize to other settings. For example, there likely is variation as

to whether managers preparing budgets use spreadsheets that are prepopulated with prior

period numbers; prepopulation could contribute to the commonly observed behavior of

adding a fixed percentage to those numbers to arrive at current year numbers (Economist

[2009]). Future research also could examine how prepopulation contributes to the

stickiness observed in many accounting settings, e.g., earnings targets for compensation

(Indjejikian et al. [2014]) and debt covenant terms (Kahan and Klausner [1997]).

Finally, we contribute to literature in psychology and economics on defaults (e.g.,

Kahneman [2003]; Just [2014]) by, first, showing that default effects can occur for a

choice that has a “right answer,” such that a preference for accuracy comes into play. The

specific moderators related to this preference that we examine also could be relevant in

other accuracy-based professions in which defaults are used, e.g., medicine. Second, our


evidence that motivated reasoning and superficial processing are associated with

responses to defaults suggests that defaults may lead to poor choices because people are

not using “high quality” cognitive processes in their presence. Third, we show that people

can have context-specific responses to defaults (here, working fast due to the importance

of efficiency in auditing) in addition to the response related to retaining them.

Our study is subject to limitations that offer opportunities for future research.

First, we focus on the effects of prepopulation on effectiveness and efficiency and show

that prepopulation can harm audit effectiveness. Prepopulation also clearly improves

efficiency for the risk assessment task per se, but can harm efficiency of later substantive

testing. Research could explore other possible beneficial and harmful effects of

prepopulation on risk assessment, e.g., on the ability to comply with PCAOB

documentation requirements. Second, we examine the effects of prepopulation in a

setting that does not contain fraud or other extreme risks. It is possible that auditors may

be less inclined to respond to prepopulation by sticking with last year and working fast in

the presence of such risks. Third, we examine the effects of prepopulation with relatively

inexperienced auditors and find few moderating effects of experience. However, results

could differ with auditors with higher levels of experience if that additional experience

gives rise to a factor that would interact with prepopulation in this setting. For example,

the effects of prepopulation could become weaker for experienced audit managers if, e.g.,

most auditors who stay with their firm to that point have strong professional identity. On

the other hand, our finding regarding self-control resources suggests that, if more

experienced auditors are, for example, working longer hours, the effects of prepopulation

could be stronger for these individuals.


REFERENCES

ALLEN, R., D. HERMANSON, T. KOZLOSKI, and R. RAMSAY. “Auditor Risk Assessment: Insights from the Academic Literature.” Accounting Horizons 20 (2006): 157-177.

ANDERSON, J., and L. REDER. “An Elaborative Processing Explanation of Depth of Processing.” In: CERMAK, L. and F. CRAIK (Eds.), Levels of Processing in Human Memory. Hillsdale, NJ: Lawrence Erlbaum Associates, 1979.

ARBUCKLE, J. “IBM SPSS Amos 24 User’s Guide,” 2016. Available at: ftp://public.dhe.ibm.com/software/analytics/spss/documentation/statistics/24.0/en/amos/Manuals/IBM_SPSS_Amos_User_Guide.pdf

BAMBER, L., and V. IYER. “Big 5 Auditors’ Professional and Organizational Identification: Consistency or Conflict?” AUDITING: A Journal of Practice and Theory 21 (2002): 21-38.

BAUER, T. “The Effects of Client Identity Strength and Professional Identity Salience on Auditor Judgments.” The Accounting Review 90 (2015): 95-114.

BAUMEISTER, R., E. BRATSLAVSKY, M. MURAVEN, and D. TICE. “Ego Depletion: Is the Active Self a Limited Resource?” Journal of Personality and Social Psychology 74 (1998): 1252-1265.

BAUMEISTER, R., M. MURAVEN, and D. TICE. “Ego Depletion: A Resource Model of Volition, Self-Regulation, and Controlled Processing.” Social Cognition 18 (2000): 130-150.

BONNER, S. “Experience Effects in Auditing: The Role of Task-Specific Knowledge.” The Accounting Review 65 (1990): 72-92.

BONNER, S. “Is Experience Necessary in Cue Measurement? The Case of Auditing Tasks.” Contemporary Accounting Research 8 (1991): 253-269.

BONNER, S. Judgment and Decision Making in Accounting. Upper Saddle River, NJ: Prentice-Hall, 2008.

BRAZEL, J., C. AGOGLIA, and R. HATFIELD. “Electronic versus Face-to-Face Review: The Effects of Alternative Forms of Review on Auditors’ Performance.” The Accounting Review 79 (2004): 949-966.

BRITT, T. “Engaging the Self in the Field: Testing the Triangle Model of Responsibility.” Personality and Social Psychology Bulletin 25 (1999): 698-708.


BRITT, T. “The Effects of Identity-Relevance and Task Difficulty on Task Motivation, Stress, and Performance.” Motivation and Emotion 29 (2005): 189-202.

BROWN, C., and A. KRISHNA. “The Skeptical Shopper: A Metacognitive Account for the Effects of Default Options on Choice.” Journal of Consumer Research 31 (2004): 529-539.

BUCHHEIT, S., D. DALTON, N. HARP, and C. HOLLINGSWORTH. “A Contemporary Analysis of Accounting Professionals’ Work-Life Balance.” Accounting Horizons 30 (2016): 41-62.

BYRNE, B. Structural Equation Modeling With AMOS: Basic Concepts, Applications, and Programming, 3rd ed. New York, NY: Routledge, 2016.

CAPPELLETTI, D., L. MITTONE, and M. PLONER. “Are Default Contributions Sticky? An Experimental Analysis of Defaults in Public Goods Provision.” Journal of Economic Behavior & Organization 108 (2014): 331-342.

CARROLL, G., J. CHOI, D. LAIBSON, B. MADRIAN, and A. METRICK. “Optimal Defaults and Active Decisions.” The Quarterly Journal of Economics 124 (2009): 1639-1674.

CHURCH, B., and L. SHEFCHIK. “PCAOB Inspections and Large Accounting Firms.” Accounting Horizons 26 (2012): 43-63.

DHINGRA, N., Z. GORN, A. KENER, and J. DANA. “The default pull: An experimental demonstration of subtle default effects on preferences.” Judgment and Decision Making 7 (2012): 69-76.

DINNER, I., E.J. JOHNSON, D.G. GOLDSTEIN, and K. LIU. “Partitioning Default Effects: Why People Choose Not to Choose.” Journal of Experimental Psychology: Applied 17 (2011): 332-341.

EARLEY, C. “The Differential Use of Information by Experienced and Novice Auditors in the Performance of Ill-Structured Audit Tasks.” Contemporary Accounting Research 19 (2002): 595-614.

ECONOMIST. “Zero-Base Budgeting,” 2009. Available at: https://www.economist.com/node/13005039

EVANS, A., K. DILLON, G. GOLDIN, and J. KRUEGER. “Trust and Self-Control: The Moderating Role of the Default.” Judgment and Decision Making 6 (2011): 697-705.

FRIEDMAN, M. Essays in Positive Economics. Chicago, IL: University of Chicago Press, 1953.


FUKUKAWA, H., T. MOCK, and A. WRIGHT. “Client Risk Factors and Audit Resource Allocation Decisions.” ABACUS 47 (2011): 85-108.

GILBERT, D., D. KRULL, and P. MALONE. “Unbelieving the Unbelievable: Some Problems in the Rejection of False Information.” Journal of Personality and Social Psychology 59 (1990): 601-613.

GRIFFITH, E., J. HAMMERSLEY, K. KADOUS, and D. YOUNG. “Auditor Mindsets and Audits of Complex Estimates.” Journal of Accounting Research 53 (2015): 49-77.

HAYES, A. Introduction to Mediation, Moderation, and Conditional Process Analysis: A Regression-Based Approach. New York, NY: Guilford Press, 2013.

HAYES, A., and M. SCHARKOW. “The Relative Trustworthiness of Inferential Tests of the Indirect Effect in Statistical Mediation Analysis: Does Method Really Matter?” Psychological Science 24 (2013): 1918-1927.

HOFFMAN, V., and J. PATTON. “Accountability, the Dilution Effect, and Conservatism in Auditors' Fraud Judgments.” Journal of Accounting Research 35 (1997): 227-237.

HU, L., and P. BENTLER. “Cutoff Criteria for Fit Indexes in Covariance Structure Analysis: Conventional Criteria Versus New Alternatives.” Structural Equation Modeling: A Multidisciplinary Journal 6 (1999): 1-55.

INDJEJIKIAN, R., M. MATEJKA, K. MERCHANT, and W. VAN DER STEDE. “Earnings Targets and Annual Bonus Incentives.” The Accounting Review 89 (2014): 1227-1258.

JAIN, S., and D. MAHESWARAN. “Motivated Reasoning: A Depth-of-Processing Perspective.” Journal of Consumer Research 26 (2000): 358-371.

JOHNSON, E., S. BELLMAN, and G. LOHSE. “Defaults, Framing and Privacy: Why Opting In ≠ Opting Out.” Marketing Letters 13 (2002): 5-15.

JOHNSON, E., and D. GOLDSTEIN. “Do Defaults Save Lives?” Science 302 (2003): 1338-1339.

JUST, D. R. Introduction to Behavioral Economics: Noneconomic Factors that Shape Economic Decisions. Hoboken, NJ: John Wiley & Sons, 2014.

KAHAN, M., and M. KLAUSNER. “Standardization and Innovation in Corporate Contracting.” Virginia Law Review 83 (1997): 713-770.


KAHNEMAN, D. “Maps of Bounded Rationality: Psychology for Behavioral Economics.” The American Economic Review 93 (2003): 1449-1475.

KESAN, J., and R. SHAH. “Setting Software Defaults: Perspectives from Law, Computer Science and Behavioral Economics.” Notre Dame Law Review 82 (2006): 583-611.

KNECHEL, W., G. KRISHNAN, M. PEVZNER, L. SHEFCHIK, and U. VELURY. “Audit Quality: Insights from the Academic Literature.” Auditing: A Journal of Practice & Theory 32 (2013): 385-421.

KPMG. Elevating Professional Judgment in Auditing, 2011. Available at: https://university.kpmg.us/audit/audit-resources/summary-of-the-kpmg-professional-judgment-framework.html

KUNDA, Z. “The Case for Motivated Reasoning.” Psychological Bulletin 108 (1990): 480-498.

LANAJ, K., R. JOHNSON, and C. BARNES. “Beginning the Workday Yet Already Depleted? Consequences of Late-Night Smartphone Use and Sleep.” Organizational Behavior and Human Decision Processes 124 (2014): 11-23.

LIBBY, R., K. RENNEKAMP, and N. SEYBERT. “Regulation and the Interdependent Roles of Managers, Auditors, and Directors in Earnings Management and Accounting Choice.” Accounting, Organizations and Society 47 (2015): 25-42.

MADRIAN, B., and D. SHEA. “The Power of Suggestion: Inertia in 401(k) Participation and Savings Behavior.” The Quarterly Journal of Economics 116 (2001): 1149-1187.

MACCALLUM, R., M. BROWNE, and H. SUGAWARA. “Power Analysis and Determination of Sample Size for Covariance Structure Modeling.” Psychological Methods 1 (1996): 130-149.

MCKENZIE, C., M. LIERSCH, and S. FINKELSTEIN. “Recommendations Implicit in Policy Defaults.” Psychological Science. 17 (2006): 414-420.

MOCK, T., and A. WRIGHT. “An Exploratory Study of Auditors’ Evidential Planning Judgments.” AUDITING: A Journal of Practice and Theory 12 (1993): 39.

MOCK, T., and A. WRIGHT. “Are Audit Program Plans Risk-Adjusted?” AUDITING: A Journal of Practice and Theory 18 (1999): 55-74.

MULLIS, C., and R. HATFIELD. “The Effects of Multitasking on Auditors’ Judgment Quality.” Contemporary Accounting Research 35 (2018): 314-333.


MURAVEN, M., D. TICE, and R. BAUMEISTER. “Self-Control as Limited Resource: Regulatory Depletion Patterns.” Journal of Personality and Social Psychology 74 (1998): 774-789.

PAYNE, J., J. BETTMAN, and E. JOHNSON. “Behavioral Decision Research: A Constructive Processing Perspective.” Annual Review of Psychology 43 (1992): 87-131.

PREACHER, K., and A. HAYES. “Asymptotic and Resampling Strategies for Assessing and Comparing Indirect Effects in Multiple Mediator Models.” Behavior Research Methods 40 (2008): 879-891.

PREACHER, K., D. RUCKER, and A. HAYES. “Addressing Moderated Mediation Hypotheses: Theory, Methods, and Prescriptions.” Multivariate Behavioral Research 42 (2007): 185-227.

PUBLIC COMPANY ACCOUNTING OVERSIGHT BOARD (PCAOB). Staff Audit Practice Alert No. 3: Audit Considerations in the Current Economic Environment, 2008.

PCAOB. AS 2110: Identifying and Assessing Risks of Material Misstatement, 2010.

PCAOB. Staff Audit Practice Alert No. 8: Audit Risks in Certain Emerging Markets, 2011a.

PCAOB. Staff Audit Practice Alert No. 9: Assessing and Responding to Risk in the Current Economic Environment, 2011b.

PCAOB. Report on 2011 Inspection of Deloitte & Touche LLP, 2012.

PCAOB. PCAOB Report Encourages Auditors to Take Action in Response to Risk Assessment Deficiencies Identified in Inspections, 2015.

RUCKER, D., K. PREACHER, Z. TORMALA, and R. PETTY. “Mediation Analysis in Social Psychology: Current Practices and New Recommendations.” Social and Personality Psychology Compass 5 (2011): 359-371.

SCHMEICHEL, B., K. VOHS, and R. BAUMEISTER. “Intellectual Performance and Ego Depletion: Role of the Self in Logical Reasoning and Other Information Processing.” Journal of Personality and Social Psychology 85 (2003): 33-46.

SLOVIC, P. “The Construction of Preference.” American Psychologist 50 (1995): 364-371.


SMITH, N., D. GOLDSTEIN, and E. JOHNSON. “Choice Without Awareness: Ethical and Policy Implications of Defaults.” Journal of Public Policy & Marketing 32 (2013): 159-172.

STEFFEL, M., E. WILLIAMS, and R. POGACAR. “Ethically Deployed Defaults: Transparency and Consumer Protection Through Disclosure and Preference Articulation.” Journal of Marketing Research 53 (2016): 865-880.

TANNENBAUM, D., and P. DITTO. “Information Asymmetries in Default Options.” Working Paper, University of Chicago, 2011. Available at: http://home.uchicago.edu/davetannenbaum/documents/default%20information%20asymmetries.pdf

THALER, R., and C. SUNSTEIN. “Libertarian Paternalism Is Not an Oxymoron.” The University of Chicago Law Review 70 (2003): 1159-1202.

THALER, R., and S. BENARTZI. “Save More Tomorrow: Using Behavioral Economics to Increase Employee Saving.” Journal of Political Economy 112 (2004): 164-187.

THALER, R., and C. SUNSTEIN. Nudge: Improving Decisions About Health, Wealth, and Happiness. New York, NY: Penguin Books, 2008.

VOHS, K., R. BAUMEISTER, B. SCHMEICHEL, J. TWENGE, N. NELSON, and D. TICE. “Making Choices Impairs Subsequent Self-Control: A Limited-Resource Account of Decision-Making, Self-Regulation, and Active Initiative.” Journal of Personality and Social Psychology 94 (2008): 883-898.


FIGURE 1
Theoretical Model for Effects of Prepopulation of Current Year Workpapers with Prior Year Assessments on Risk Rating Accuracy

This figure illustrates our theoretical model. Our manipulated variable is prepopulation of workpapers with prior year risk assessments (i.e., ratings and evidence), and our dependent variable is auditors’ accuracy at changing (or leaving unchanged) risk ratings from the prior year depending on whether they changed (or did not change) from the prior year. Link 1 and Link 2 illustrate our expectation of auditors’ behavioral responses to prepopulation of workpapers, and Link 3 and Link 4 illustrate the predicted effect of these responses on auditors’ risk rating accuracy (separately for changing and unchanged risk factors).

[Figure 1 diagram: The default option created by prepopulating current year workpapers with prior year assessments increases the stick-with-last-year response (Link 1: +, all risk types) and the work fast response (Link 2: +, all risk types). The stick-with-last-year response affects risk rating accuracy negatively for changing risks and positively for unchanged risks (Link 3); the work fast response affects risk rating accuracy negatively for both changing and unchanged risks (Link 4).]


FIGURE 2
Results of Process Model for Effects of Prepopulation on Accuracy for Increasing Risk Factors

Prepopulation is our manipulated variable – auditors are provided with either prepopulated or non-prepopulated workpapers. Accuracy for Increasing Risk Factors, measured as the count of increasing risk factors for which auditors correctly moved upward from last year’s risk rating, is the dependent variable. We expect auditors to show two behavioral responses to prepopulation of workpapers: sticking with last year’s risk rating (Stick with Last Year) and working fast on the risk assessment (Work Fast). We measure Stick with Last Year by the total number of times auditors stick to the prior year rating for decreasing and no change risk factors; sticking should reduce accuracy for increasing risk factors. We measure Work Fast by the time spent on the risk assessment task; spending less time should reduce accuracy for increasing risk factors. We adjust the time spent by participants in the prepopulated workpaper condition to make it comparable to the non-prepopulated workpaper condition by subtracting their actual typing time and adding an estimate of the average typing time of participants in the non-prepopulated workpaper condition. We test the model using a structural equation modeling approach. The model fits the data well, as demonstrated by the chi-squared test (χ²(1) = 0.16, p = 0.690) and other standard measures of fit (CFI = 1.00 and RMSEA = 0.01). Process mediators are accompanied by the 90% bootstrapped confidence intervals (Preacher and Hayes [2008]); we use this approach to formally test the mediation of the paths. All p-values are one-tailed for directional predictions.

[Figure 2 diagram: Prepopulation → Stick with Last Year: B = +4.64, p < 0.001; Stick with Last Year → Accuracy for Increasing Risk Factors: B = -0.45, p < 0.001; indirect effect 90% CI (-2.65, -1.57). Prepopulation → Work Fast: B = +3.20, p = 0.032; Work Fast → Accuracy: B = -0.06, p = 0.001; indirect effect 90% CI (-0.43, -0.04). Direct path, Prepopulation → Accuracy: B = -0.79, p = 0.029.]
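For reference, the point estimate of an indirect effect is the product of its two path coefficients (Preacher and Hayes [2008]); a quick arithmetic check using the Stick with Last Year coefficients reported in this figure:

```python
# a-path: Prepopulation -> Stick with Last Year; b-path: Stick with
# Last Year -> Accuracy for Increasing Risk Factors (from the figure).
a = 4.64
b = -0.45
indirect = a * b
print(round(indirect, 2))  # → -2.09, inside the bootstrapped 90% CI (-2.65, -1.57)
```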


FIGURE 3
Results of Process Model for Effects of Prepopulation on Accuracy for Decreasing Risk Factors

Prepopulation is our manipulated variable – auditors are provided with either prepopulated or non-prepopulated workpapers. Accuracy for Decreasing Risk Factors, measured as the count of decreasing risk factors for which auditors correctly moved down from last year’s risk rating, is the dependent variable. We expect auditors to show two behavioral responses to prepopulation of workpapers: sticking with last year’s risk rating (Stick with Last Year) and working fast on the risk assessment (Work Fast). We measure Stick with Last Year by the total number of times auditors stick to the prior year rating for increasing and no change risk factors; sticking should reduce accuracy for decreasing risk factors. We measure Work Fast by the time spent on the risk assessment task; spending less time should reduce accuracy for decreasing risk factors. We adjust the time spent by participants in the prepopulated workpaper condition to make it comparable to the non-prepopulated workpaper condition by subtracting their actual typing time and adding an estimate of the average typing time of participants in the non-prepopulated workpaper condition. We test the model using a structural equation modeling approach. The model fits the data well, as demonstrated by the chi-squared test (χ²(1) = 0.67, p = 0.413) and other standard measures of fit (CFI = 1.00 and RMSEA = 0.00). Process mediators are accompanied by the 90% bootstrapped confidence intervals (Preacher and Hayes [2008]); we use this approach to formally test the mediation of the paths. All p-values are one-tailed for directional predictions.

[Figure 3 diagram: Prepopulation → Stick with Last Year: B = +5.32, p < 0.001; Stick with Last Year → Accuracy for Decreasing Risk Factors: B = -0.45, p < 0.001; indirect effect 90% CI (-1.33, -0.60). Prepopulation → Work Fast: B = +3.20, p = 0.032; Work Fast → Accuracy: B = -0.06, p = 0.001; indirect effect 90% CI (-0.26, -0.02). Direct path, Prepopulation → Accuracy: B = -1.01, p < 0.001.]


FIGURE 4
Results of Process Model for Effects of Prepopulation on Accuracy for No Change Risk Factors

Prepopulation is our manipulated variable – auditors are provided with either prepopulated or non-prepopulated workpapers. Accuracy for No Change Risk Factors, measured as the count of no change risk factors for which auditors correctly did not change from last year’s risk rating, is the dependent variable. We expect auditors to show two behavioral responses to prepopulation of workpapers: sticking with last year’s risk rating (Stick with Last Year) and working fast on the risk assessment (Work Fast). We measure Stick with Last Year by the total number of times auditors stick to the prior year rating for increasing and decreasing risk factors; sticking should increase accuracy for no change risk factors. We measure Work Fast by the time spent on the risk assessment task; spending less time should reduce accuracy for no change risk factors. We adjust the time spent by participants in the prepopulated workpaper condition to make it comparable to the non-prepopulated workpaper condition by subtracting their actual typing time and adding an estimate of the average typing time of participants in the non-prepopulated workpaper condition. The model does not fit the data well, as indicated by the chi-squared test (χ²(1) = 5.94, p = 0.015) and RMSEA = 0.20. However, the CFI indicates good fit (CFI = 0.97). Process mediators are accompanied by the 90% bootstrapped confidence intervals (Preacher and Hayes [2008]); we use this approach to formally test the mediation of the paths. All p-values are one-tailed for directional predictions.

[Figure 4 diagram: Prepopulation → Stick with Last Year: B = +5.96, p < 0.001; Stick with Last Year → Accuracy for No Change Risk Factors: B = +0.05, p < 0.001; indirect effect 90% CI (0.99, 1.76). Prepopulation → Work Fast: B = +3.20, p = 0.032; Work Fast → Accuracy: B = -0.03, p = 0.001; indirect effect 90% CI (-0.37, -0.03). Direct path, Prepopulation → Accuracy: B = +0.81, p = 0.008.]


TABLE 1

Panel A – Descriptive Statistics for Dependent Variable (Auditors’ Risk Rating Accuracy, by Risk Factor Type)

                                           Prepopulated     Non-Prepopulated
                                           Workpapers       Workpapers
Increasing Risks (8 total risk factors)    2.92 (2.49)      5.96 (1.40)
Decreasing Risks (6 total risk factors)    1.41 (1.38)      3.46 (1.09)
No Change Risks (5 total risk factors)     4.21 (1.24)      2.21 (1.76)
Number of Audit Staff Participants         56               61

Panel B – Descriptive Statistics for Mediators (Auditors’ Behavioral Responses to Prepopulation)

                                           Prepopulated     Non-Prepopulated
                                           Workpapers       Workpapers
Stick with Last Year – Increasing Risks    8.26 (2.74)      3.63 (2.22)
Stick with Last Year – Decreasing Risks    9.23 (3.36)      3.91 (2.36)
Stick with Last Year – Unchanged Risks     9.07 (4.06)      3.11 (1.83)
Work Fast – Across All Risk Factors        16.13 (7.67)     19.33 (10.87)


Panel C: Tests of Hypotheses – Analysis of Variance (all participants)

DV: Accuracy for Increasing Risk Factors (H1)
                             df        SS        MS        F          p
Prepopulation Condition       1    270.93    270.93    65.11    < 0.001
Error                       115    478.52     4.161
Corrected Total             116    749.45

DV: Accuracy for Decreasing Risk Factors (H1)
                             df        SS        MS        F          p
Prepopulation Condition       1    123.23    123.23    78.43    < 0.001
Error                       115    180.68     1.571
Corrected Total             116    303.92

DV: Accuracy for No Change Risk Factors (H2)
                             df        SS        MS        F          p
Prepopulation Condition       1    116.65    116.65    51.27    < 0.001
Error                       115    261.66     2.275
Corrected Total             116    378.31

We manipulate, between participants, whether audit workpapers are prepopulated with prior year risk ratings and evidence or are non-prepopulated (i.e., left blank). All participants also view prior year risk ratings and evidence in a separate file. Panel A reports descriptive statistics (standard deviations) for the dependent variable (i.e., auditors’ risk rating accuracy). Note that two of the decreasing risk factors were relatively ambiguous, such that alternate answers could be reasonable; excluding these two risk factors, performance for decreasing risks is 3.11 (1.30) out of 4 for auditors with non-prepopulated (prepopulated) workpapers. Panel B reports descriptive statistics (standard deviations) for the mediators (i.e., auditors’ behavioral responses to prepopulated workpapers). For each risk factor type, Stick with Last Year is calculated as the total number of times auditors select a current year risk rating that is the same as the prior year risk rating for the other two types of risk factors (e.g., Stick with Last Year – Increasing Risks counts sticking on the decreasing and no change risk factors). Work Fast is measured using the amount of time participants spend on the risk assessment task; we make the measure comparable across workpaper conditions by subtracting the time participants in the prepopulated workpaper condition spend typing and replacing it with an estimate of the average time auditors in the non-prepopulated workpaper condition spend typing.
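The Work Fast adjustment described above is simple arithmetic; the following sketch is our own illustration (the function and variable names are assumptions, with times in minutes):

```python
def adjusted_work_fast(total_minutes, own_typing_minutes, avg_nonprepop_typing_minutes):
    """Make a prepopulated-condition participant's task time comparable to the
    non-prepopulated condition: remove the participant's own typing time, then
    add back the average typing time observed among non-prepopulated auditors."""
    return total_minutes - own_typing_minutes + avg_nonprepop_typing_minutes

# e.g., a participant who spent 14 minutes total, 1 of them typing, when
# non-prepopulated participants averaged 4 minutes of typing:
adjusted_work_fast(14.0, 1.0, 4.0)  # -> 17.0
```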

Panel C reports tests of the overall effects of prepopulation (analysis of variance on the dependent variable, by risk factor type). The dependent variables for H1 are auditors’ accuracy at assessing risk factors that have increased since the prior year (for eight total risk factors, the number of times auditors correctly increase the risk rating in the current year) and risk factors that have decreased since the prior year (for six total risk factors, the number of times auditors correctly decrease the risk rating in the current year). The dependent variable for H2 is auditors’ accuracy at assessing risk factors that have not changed since the prior year (for five total risk factors, the number of times auditors correctly leave the risk rating unchanged in the current year). All p-values are one-tailed for directional predictions.
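The F statistics in Panel C can be recomputed directly from the reported sums of squares and degrees of freedom. A quick check (our own sketch using scipy; note that `f.sf` returns the standard F-distribution p-value, while the table reports one-tailed p-values for directional predictions):

```python
from scipy.stats import f as f_dist

def anova_f(ss_model, df_model, ss_error, df_error):
    """F statistic and F-distribution p-value from an ANOVA's sums of squares."""
    F = (ss_model / df_model) / (ss_error / df_error)
    return F, f_dist.sf(F, df_model, df_error)

# Panel C values: e.g., increasing risks, F = 270.93 / (478.52/115) ≈ 65.11
F_inc, p_inc = anova_f(270.93, 1, 478.52, 115)  # increasing risk factors
F_dec, p_dec = anova_f(123.23, 1, 180.68, 115)  # decreasing risk factors
F_nc,  p_nc  = anova_f(116.65, 1, 261.66, 115)  # no change risk factors
```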
