the flow of his or her sensitive medical information is still limited. Many situations, such as emergencies, result in disclosure of more than the minimum PHI necessary. The Privacy Rule specifically requires covered entities to safeguard PHI from accidental or intentional use or disclosure. The primary objective of this paper is to study the causes of such privacy breaches and to suggest measures to limit or prevent such accidental or erroneous uses or disclosures.

1.2. Human error and privacy breaches: a literature review

Reason (1990) defined error as "the failure to achieve an intended outcome in a planned sequence of mental or physical activities when failure is not due to chance." On the other hand, a malicious act is intentional and done to cause harm. Reason (1990) also categorized human error into slips and mistakes. Slips occur as an outcome of the incorrect execution of a correct action sequence; mistakes occur as an outcome of the correct execution of an incorrect action sequence, i.e., wrong decisions executed correctly. Mistakes, also known as "planning failures" or "errors of intention," result from faulty conceptual knowledge, incomplete knowledge, or incorrect action specification. Slips are "execution failures," often arising from an actor's lack of skill, confusion, or loss of activation, as in the case of forgetting the original intention. Reason uses the term "lapse" to denote "an omission to execute an action as planned, due to a failure of memory." For the purpose of our analysis, the term "slip" encompasses "lapse."

Rasmussen (1983) classified human performance into three categories. Of these, skill-based behaviors are routine activities conducted automatically that do not require conscious allocation of attentional resources. Rule-based behaviors are controlled by stored rules or procedures. Finally, knowledge-based behaviors are those in which stored rules no longer apply and performance is goal-oriented. Reason (1990) analyzed these three behaviors further and developed the skill-rule-knowledge (SRK) taxonomy of human error: slips and lapses occur only in skill-based behaviors, and mistakes occur only in rule-based and knowledge-based behaviors.
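The structural claim of the SRK taxonomy, that each behavior class admits only certain error types, can be written down directly. The following sketch is our illustration (in Python, with names of our choosing), not an artifact of the original studies:

```python
# A minimal sketch of the SRK taxonomy's mapping: slips/lapses attach only
# to skill-based behavior, mistakes to rule- and knowledge-based behavior
# (Rasmussen, 1983; Reason, 1990).
from enum import Enum

class Behavior(Enum):
    SKILL = "skill-based"
    RULE = "rule-based"
    KNOWLEDGE = "knowledge-based"

class ErrorType(Enum):
    SLIP = "slip"       # per the analysis above, "slip" also covers lapses
    MISTAKE = "mistake"

def possible_errors(behavior):
    """Error types the SRK taxonomy associates with a behavior class."""
    if behavior is Behavior.SKILL:
        return {ErrorType.SLIP}      # slips and lapses only
    return {ErrorType.MISTAKE}       # rule- and knowledge-based behaviors

assert possible_errors(Behavior.RULE) == {ErrorType.MISTAKE}
assert possible_errors(Behavior.SKILL) == {ErrorType.SLIP}
```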

Senders and Moray (1991) argued that humans often err in ignorance and sometimes despite a strong determination to avoid error. Errors may arise from processes within an actor or from processes external to the actor. Incorrect input, insufficient knowledge, or wrong action are all causes of error. Additional discussions and research on trends on the topic of human error are available in Norman (1988) and Zhang et al. (2004).

The impact of human error has been studied in many domains, especially aviation (Shappell et al., 2007), online banking (Kjaerland, 2006), and medicine (Leape et al., 1993; Weinstein, 2001). Leape et al. (1993) classified medical errors into diagnostic, treatment-related, system-related, or miscommunication errors. The decentralized and fragmented nature of the healthcare delivery system and organizational issues such as staffing, procedures, and work environment are considered major contributing factors (Weinstein, 2001) to the occurrence of errors. Two reports sponsored by the Institute of Medicine (IOM) (2000, 2001) identified medical errors not only as a leading cause of death in the U.S. but also, of equal importance, as a critical problem in terms of human and social costs. The IOM reports motivated numerous researchers to try to identify the dimensions of medical error management (Billings and Woods, 2001; Lenert and Bakken, 2002; Patel and Bates, 2003). Zhang et al. (2002), in particular, argued that cognitive factors are critical at six different levels of the healthcare system hierarchy. At the core are individuals, who trigger errors without necessarily being their root cause. The next higher hierarchical levels are: interactions among individuals and between groups of people and technologies; organizational structures such as coordination and communication; institutional functions such as policies and guidelines; and, lastly, national regulatory agencies. Individuals are at the last stage in the chain of events that lead to an error; therefore, cognitive interventions are crucial in preventing human errors. Norman's seven-stage action theory and its extensions by Zhang et al. (2004) undergird our study of privacy breaches within the healthcare system.

The Institute for Safe Medication Practices (ISMP) (2002) recommends incorporating error management into an organization's strategic plan, proactively identifying error-prone processes, and devising safe alternatives by using process flow analysis to combat medication errors. Further, the ISMP (1999) suggests the following measures, in order of priority, to remedy medication errors: design of processes with forcing functions and constraints so that errors are virtually impossible or difficult to make; simplification and standardization of procedures; reminders, checklists, and double-check systems; rules and policies to support error prevention and disciplinary measures for errant actions; and enhanced education and awareness. A recent study of four hospitals (McFadden et al., 2004) identified seven critical factors for successfully reducing the likelihood of medical errors or minimizing their effects. The five most important are: (i) a new organizational structure, (ii) causal analysis and redesigned processes and systems, (iii) a modified organizational safety culture, (iv) a focus on the process and not the individual, and (v) education and training.
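The ISMP's top-priority measure, forcing functions, has a direct software analogue: design the system so the unsafe path simply does not exist. The sketch below is a hypothetical illustration (the routine and exception names are ours, not from any cited system):

```python
# Hypothetical forcing function: PHI cannot be released without a documented
# authorization, because no code path returns the record otherwise.
from typing import Optional

class MissingAuthorizationError(Exception):
    """Raised when disclosure is attempted without documented authorization."""

def disclose_record(record: dict, authorization_id: Optional[str]) -> dict:
    # The constraint is structural, not advisory: the action fails here.
    if not authorization_id:
        raise MissingAuthorizationError(
            "Disclosure blocked: attach a documented authorization first."
        )
    return {"record": record, "authorization": authorization_id}
```

Reminders and double-checks sit lower on the ISMP hierarchy precisely because they can be ignored; a forcing function cannot.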

Several recent studies on the organizational context of human error have recognized the difficulty of eliminating human error altogether. Walker et al. (2008) addressed safety concerns in electronic health records (EHRs) and proposed steps for process improvement as well as ways to build safety into the initial design of EHRs. Schonbeck et al. (2010) presented a new approach to address human and organizational factors in the operational phase (rather than in the design phase) of safety instrumented systems. Targoutzidis (2010) proposed using fault tree analysis to reduce workplace process errors. Similarly, Mohaghegh and Mosleh (2009) studied ways to improve workplace safety by reducing errors in organizational processes. Leveson (2008) studied the technical and managerial factors in the losses of the NASA Challenger and Columbia space shuttles and concluded that organizational or managerial flaws negated the potential engineering and technical solutions. Kraemer and Carayon (2007) and Kraemer et al. (2009) concluded that human and organizational factors loom large as the cause of computer and information security vulnerabilities (a topic closely aligned with privacy) and emphasized the complex relationships among human and organizational factors.

Human error has also been widely recognized as a main cause of privacy breaches (Kraemer et al., 2009; Liginlal et al., 2009; Schultz, 2005; Stanton et al., 2005; Wood and Banks, 1993). In recent times, consumers have become increasingly wary of threats to their privacy. A poll conducted by Zogby International (2007) showed that 85% of the respondents valued their personal information and considered safeguarding their privacy a top priority. Firms that exposed customers' private information, no matter the reason, have suffered substantial consequences. In 2005, ChoicePoint was fined $15 million by the Federal Trade Commission to settle charges that it failed to protect consumers' private information (Sullivan, 2006). The Privacy Rights Clearinghouse estimated that, during attacks on ChoicePoint, over 100 million customer records were put at risk of identity theft (Privacy Rights Clearinghouse, 2010). Privacy breaches have also been shown to lower the stock market's opinion of the firms at fault. In fact, ChoicePoint's stock price declined more than 18% over the course of a few weeks after the announcement of the company's privacy breaches. Campbell et al. (2003) found that breaches involving loss of confidential data were associated with significant declines in the stock prices of affected firms. Acquisti et al. (2006) also found a significantly negative effect of privacy breaches, especially for larger firms with high visibility. These studies clearly demonstrate that privacy breaches eroded customers' trust in the firms involved.

Liginlal et al. (2009) investigated publicly reported incidents of privacy breaches in the U.S. and determined that human error caused a majority of them. They found that mistakes in the information processing stage constituted the highest number of human error-related privacy breaches, clearly highlighting the need for effective policies and their enforcement. Their study, however, did not specifically focus on either the issue of medical privacy or on the specific organizational context that caused the error. Although many organizations have had significant breaches of PHI since 2003, disciplinary measures were not enforced until recently. The recent example of Providence Health & Services (PH&S) demonstrates one of the first instances of stricter enforcement (Portland Business Journal, 2010). The PH&S case involved the theft of laptops, disks, and tapes containing health records from employees' cars in Seattle in 2005 and 2006, thus compromising the PHI of nearly 400,000 patients. This reportedly was a case of human error arising from poor policy formulation, enforcement, and employee education. PH&S was required to implement corrective measures and was also fined $100,000.

1.3. Research questions and organization of the paper

Against this backdrop of pervasive privacy breaches attributable to human error, our paper addresses the following key questions:

1. Should covered entities consider human error an important cause of noncompliance with HIPAA Privacy Rule?

2. What are the causal factors of human errors that lead to such noncompliance?

3. What operational strategies and measures related to the identified causal factors are essential to the management of errors that adversely affect compliance with HIPAA Privacy Rule?

Our research is founded on Reason's (1990) GEMS error typology, Rasmussen's (1983) SRK framework, and Norman's (1988) seven-stage action theory. To answer these questions, we used secondary data analysis, in-depth interviews of privacy officers, and an interpretive research methodology. The study draws upon Zhang et al.'s (2002) cognitive taxonomy of human error and upon Liginlal et al.'s (2009) study of the impact of human error on privacy and its overall regulatory implications.

The remainder of the paper is organized as follows. In Section 2, we provide preliminary evidence confirming human error as a major cause of HIPAA violations. Section 3 focuses on the design of the qualitative research study based on semistructured interviews of privacy officers of leading U.S. healthcare organizations in four states. Section 4 presents the results of an interpretive analysis of the interview transcripts. This analysis sheds light on the causes of human error leading to privacy breaches. Then, in Section 5, we apply action theory to study our compilation of reports of privacy breaches and to further understand the causal issues underlying breaches and their impact on HIPAA compliance. In Section 6, based on our analyses, we develop a framework designed to manage human error in the context of compliance with HIPAA Privacy Rule. We conclude the paper by outlining the contributions and limitations of our work along with a description of ongoing related research.

2. Does human error affect HIPAA compliance? Preliminary evidence

Prior research on privacy breaches (e.g., Liginlal et al., 2009) has relied on the analysis of secondary data from publicly available resources. From a methodological perspective, such an approach, subsequently cross-validated with data from prominent public sources, has been used effectively in various similar studies (Bagchi and Udo, 2003; Cavusoglu et al., 2004; Sullivan, 2006; Erickson and Howard, 2007). The first part of our study, which aimed at gathering preliminary evidence, employs this research methodology. However, the rest of our study differs from these earlier studies in two respects: (i) the dataset is up to date, covering January 1, 2005, to April 30, 2010, and (ii) only data pertaining to healthcare organizations, i.e., privacy breaches with implications pertinent to HIPAA, were considered. Overall, we compiled 237 incidents of privacy breaches related to loss of PHI.

We analyzed this compilation and classified these privacy breaches into those attributable to human error and those caused by malicious attacks. Two researchers analyzed the 237 incidents independently, and a third researcher helped resolve the resulting discrepancies. The observed inter-rater agreement was 0.935 (kappa = 0.55; two-tailed p < 0.0001). The high value of Cohen's kappa indicated that the coding approach was reliable, satisfying Landis and Koch's (1977) threshold of 0.77. We also analyzed the cause of each privacy breach and determined seven categories of causes, as depicted in Figure 1.
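For readers unfamiliar with the statistic, Cohen's kappa corrects raw agreement for the agreement two raters would reach by chance given their marginal coding frequencies. The sketch below is our illustration; the confusion-matrix counts are invented, not the study's data, and are chosen to show how a heavily skewed two-category coding task can pair raw agreement above 0.9 with a much lower kappa:

```python
# Cohen's kappa from a 2x2 confusion matrix of two raters' codes.
# Hypothetical counts: raw agreement near 0.93 yields kappa near 0.55
# when one category (human error) dominates the sample.
def cohens_kappa(confusion):
    n = len(confusion)
    total = sum(sum(row) for row in confusion)
    p_o = sum(confusion[i][i] for i in range(n)) / total      # observed agreement
    p_e = sum(                                                # chance agreement
        (sum(confusion[i]) / total) * (sum(row[i] for row in confusion) / total)
        for i in range(n)
    )
    return (p_o - p_e) / (1 - p_e)

# Rows: rater 1 (human error / malicious); columns: rater 2.
example = [[208, 9], [8, 12]]
print(round((208 + 12) / 237, 3))        # 0.928 raw agreement
print(round(cohens_kappa(example), 2))   # 0.55
```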

Overall, we determined that only 12% of the 237 incidents involved malicious acts; human error accounted for the remainder (approximately 88%).

Fig. 1. Causes of medical privacy breaches in the U.S.

In the latter category, we found that 126 (53%) incidents were related to stolen or missing media containing confidential patient information. These losses stemmed from poor policy formulation and/or enforcement in healthcare organizations, as amply demonstrated by the PH&S case mentioned earlier. Although the theft in these losses is itself a malicious act, the root cause can be attributed to an employee violating procedures (mistakes) or to an employee's act of omission (slips). From the organization's perspective, the loss of PHI results from failure to enforce policies and procedures regarding remote computer use, compounded by the absence of built-in safeguards, such as strong data encryption. Stolen laptops or lost media have also been recognized in Liginlal et al. (2009) as a significant cause of privacy loss and/or identity theft.
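File- or disk-level encryption is the built-in safeguard referred to above: a stolen device then yields only ciphertext. A minimal sketch, assuming the third-party Python `cryptography` package and purely illustrative data:

```python
# Encrypting PHI at rest so that a stolen laptop or disk is unreadable.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, held in a key vault, not on the device
cipher = Fernet(key)

phi = b"Patient: J. Doe, MRN 000000, Dx: ..."   # dummy record for illustration
token = cipher.encrypt(phi)   # this ciphertext is what sits on the laptop's disk

# Without the key a thief holds only ciphertext; with it, staff can decrypt.
assert cipher.decrypt(token) == phi
```

The design point is that the key must live somewhere other than the stolen medium, for example in a centrally managed key store.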

3. Studying the nature of human error in healthcare organizations

Our preliminary results justified a more detailed analysis of the organizational and human issues underlying privacy breaches. Research in psychology has demonstrated, for instance, that the tendency for error is strongly influenced by adverse work conditions (Berner and Moss, 2005). These conditions include employee stress, fatigue, time pressures, cognitive overload, understaffing, complex procedures, inadequate supervision, poor communication, rapid organizational change, and flaws in system design or implementation. Studying these causal factors from an organizational perspective yields insight into the nature and types of errors, the extent of their impact, and techniques to prevent or mitigate this impact. This study will also help gauge the importance that healthcare organizations assign to human error as a cause of privacy breaches and identify the measures they have adopted to remedy these underlying causes.

3.1. Privacy officers as subjects of the study

HIPAA requires healthcare organizations to designate privacy officers whose primary responsibilities include tracking the use of PHI, setting up complaint procedures and punitive action guidelines, and developing overall HIPAA policies and procedures. Besides keeping up with the latest privacy practices, privacy officers also frequently supervise operations and training. In effect, a privacy officer is the driving force behind enforcing HIPAA compliance and is, as such, the best source of information about organizational practices and compliance history. Although privacy officers vary in their educational backgrounds, level of understanding of privacy and security concepts, and skills in policy analysis and decision-making, we contend that their experience allows them to better comprehend the relationship between human error and HIPAA privacy breaches. Two privacy officers with close ties to the university with which two of the researchers were affiliated assisted in the design of the research. Besides serving as reliable information sources, they provided invaluable assistance, particularly in developing and refining a questionnaire and in obtaining contact information for potential interviewees.

A list of privacy officers in four U.S. states was compiled by searching the websites of the state bar associations, the HIPAA Collaborative (a joint effort of healthcare organizations to build a platform for implementing HIPAA), and the websites of leading healthcare organizations in the region. The two experts who helped in the research design also provided additional contact information and leads. In all, we sent out solicitations to privacy officers in 35 large and medium-sized healthcare organizations. Only 21 privacy officers responded to our request for an interview. Each of them was then contacted and briefed about the objectives of the study and its broader impact. Unsurprisingly, they were reluctant to participate because of the confidential nature of the subject. Multiple follow-up requests and signed confidentiality agreements were necessary to gain their participation. Ultimately, only 15 privacy officers agreed to be interviewed.

3.2. The interview protocol

The interview method is one of the most important data gathering instruments in qualitative research. Interviews may be a highly structured and quantitative oral survey, a semistructured guided conversation, or a free-flowing information exchange (Myers and Newman, 2007). Reason (1990) ranked questionnaire studies as the second most important research method in human error and safety studies. Leveson et al. (2005), while testing the hypothesis that the safety culture of the National Aeronautics and Space Administration (NASA) can be modeled using systems dynamics, collected most of their data by interviewing employees of the manned space program. Kraemer and Carayon (2007) explored the issue of human error and violations of information security through a series of 16 interviews with network administrators and security specialists.

Based on a preliminary review of the literature (Bogner, 1994; McFadden et al., 2004; U.S. Department of Health & Human Services, 2002, 2006; Vincent et al., 1998) and the assistance of our two experts, we developed a semistructured questionnaire. In constructing the questionnaire, we considered the influence of both organizational factors and employees' knowledge and skill levels on the likely incidence of human error. The organizational factors specifically considered were understaffing, high turnover, low morale, heavy workload, and work environment.

The questionnaire had two parts. The first sought to understand the background of the organization and gauge the interviewee's depth of experience in HIPAA administration, particularly compliance with HIPAA Privacy Rule. The second part, as described below and outlined in Appendix B, was divided into three sections designed to understand the relationship between human error and HIPAA privacy breaches and to study the underlying causes, impact, and management of human error.

1. The first section sought to evaluate the extent of human error as a cause of privacy breaches, categorize the causes, and understand the impact of human error compared with other forms of threats to privacy.

2. The second section aimed to determine the interviewee's assessment of the relationship between employees' skill levels and the frequency of errors. The questions were designed to study the differences, in magnitude and impact, among various types of human errors, i.e., slips and mistakes, in a healthcare setting. Particular care was taken in designing this section of the questionnaire to instruct the interviewees not to emphasize human error as a direct cause of privacy breaches, but to focus instead on the human and environmental factors that cause the errors leading to these breaches.

3. The third section examined what priority upper management gave to combating human error as a cause of privacy breaches and the operational strategies required to handle these incidents and manage human error. The interviewee was also asked to recount specific privacy breaches that occurred in his or her organization and how these incidents were handled.

The interview protocol not only required privacy officers to relate human error to HIPAA privacy breaches but also, equally important, encouraged their creative and contextual thought processes related to the avoidance, interception, and mitigation of human error. The interview questions purposefully avoided emphasis on terms such as mistake and slip; instead, they used similar words or terms, such as inadequate knowledge (mistake) and poor skills or lapse of concentration (slip). The intention was to elicit early in the interview the interviewee's general perception of human error as a risk for HIPAA privacy breaches. Open-ended questions, with interviewees prompted at appropriate moments, were used to elicit additional information. Three illustrative examples corresponding to the skill-rule-knowledge taxonomy of human error (Rasmussen, 1983; Reason, 1990), created in consultation with our experts, were included in Part I of the questionnaire and distributed in advance to the interviewees. The experts felt this was critical so that the interviewees could gain a sufficient understanding of the term human error and comprehend the likely causes of human error. To stimulate contextual thinking, the research team asked the interviewees to relate their experiences in managing human error related issues to their subsequent drafting of policies or action plans. At the conclusion of each interview, the researchers' notes were transcribed and organized question by question into a text-only electronic format.

As mentioned earlier, only 15 of the 21 privacy officers who responded to our request ultimately agreed to be interviewed. Eleven interviews were face-to-face, each lasting about an hour at the officers' workplaces. Because of scheduling and logistical problems, four other interviews were done by telephone. The first set of interviews was conducted in two states in the Midwest over three months and the second set in two states in the South over two months. Nine interviews were conducted by two researchers and the rest by one researcher. The privacy officers represented five medium-sized hospitals with 500 to 700 beds and multiple primary clinics, three large academic health systems, and three large medical centers comprising a hospital, pharmacy, and health center. Given its qualitative nature and the confidential nature of the data, our study, involving two privacy officers in the design phase and 15 other privacy officers in the interview phase, may be comparable, in terms of design validity, to similar interview studies in the information systems discipline (Myers and Newman, 2007), the applied ergonomics literature (Kraemer and Carayon, 2007), and the information security area (Kraemer et al., 2009).

Approximately a week after each interview, a follow-up questionnaire was mailed to the interviewees to probe for any additional thoughts they did not articulate during the interview. The study team then combined and contrasted the findings from the literature review and from the interviews to identify areas of overlap, agreement, or disagreement. The notes from the interviews were combined, and any discrepancies in these notes were resolved with the help of a third researcher. To protect anonymity and provide confidentiality, no interviewee names were recorded in any document, regardless of format or media.

3.3. Analyzing the causes of human error

A list of eight possible causes of human error that lead to privacy breaches was compiled, as shown in Table 1. Our two experts worked with us independently to create this consolidated list. During our interviews, we asked each privacy officer to identify all possible causes of human error he or she could think of that would lead to privacy breaches. Whenever we sensed a lack of consensus or confusion of terminology, we asked the interviewees for further clarification. Our post

staff should, therefore, be a high priority in gaining compliance with HIPAA.

In summary, the most significant finding was a consensus that more than 90% of known HIPAA privacy breaches are unintentional and nonmalicious, an indication that human error, whether mistakes or slips, constitutes the most significant threat to privacy. This number also tallies with the preliminary evidence of the causes of publicly reported incidents of privacy breaches listed in Figure 1.

5. Applying action theory to analyze incident reports

To better understand the nature of human error in organizations, we extended our analysis beyond the GEMS model by applying the seven-stage action theory of Norman (1988) and the taxonomy of medical error of Zhang et al. (2004). Norman (1988) represented human action as a sequence of stages encompassing the formation of goals, their execution, and their evaluation. The seven stages consist of: formulating a goal, transforming the goal into an intention, specifying an action sequence, executing the action, perceiving what has actually happened, interpreting what has happened, and, finally, evaluating the outcome and comparing it to the original goals and intentions. Investigating and analyzing the seven stages that human actors go through is a good way to understand how errors occur.

Zhang et al. (2004) developed a cognitive taxonomy of human errors based on Reason's GEMS error typology and Norman's action theory. Human error, both slips and mistakes, may occur at any of the seven stages. Such errors may further be categorized into either execution errors or evaluation errors. Besides goal errors, execution errors can occur at the stages of intention, action specification, and action execution. Evaluation errors can occur at the perception, interpretation, or action evaluation stages. This cognitive taxonomy offers a systematic and theory-based approach to the categorization of errors at an individual's cognitive level and also in relation to the use of technology. Further, the taxonomy can be used for the development of cognitive interventions and corrective actions.
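The taxonomy's structure is a simple cross-product: seven stages, each admitting a slip and a mistake, for fourteen categories in all. A sketch of that structure (our Python illustration, with labels taken from the text above):

```python
# The 7 x 2 structure of Zhang et al.'s (2004) cognitive taxonomy:
# each of Norman's seven stages can host a slip or a mistake,
# yielding the 14 categories the study codes incidents against.
STAGES = [
    "goal", "intention", "action specification", "action execution",
    "perception", "interpretation", "action evaluation",
]
EXECUTION_STAGES = set(STAGES[:4])   # execution side of the action cycle
EVALUATION_STAGES = set(STAGES[4:])  # evaluation side of the action cycle

def categorize(stage, error_type):
    """Place an incident, coded by stage and error type, in the taxonomy."""
    if stage in EXECUTION_STAGES:
        side = "execution"
    elif stage in EVALUATION_STAGES:
        side = "evaluation"
    else:
        raise ValueError(f"unknown stage: {stage}")
    return f"{side} {error_type} at the {stage} stage"

print(categorize("perception", "slip"))  # evaluation slip at the perception stage
```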

An important question in the unstructured part of the interview required privacy officers to recall actual privacy breaches in their organizations that were caused by human error. At the end of each interview, we also requested access to documented privacy breaches from the respective organization. These requests were accompanied by guarantees of strict privacy. We repeated this request in a follow-up e-mail. In all, we compiled 38 incidents from these three sources, i.e., interviews, de-identified organizational incident reports, and the publicly available incident reports discussed in Section 2. Two researchers independently analyzed the compiled information and categorized the incidents based on the error taxonomy. A third researcher helped reconcile several initial discrepancies in coding. The inter-rater agreement was 0.814 (kappa = 0.65; two-tailed p < 0.0001). This value is acceptable, according to Landis and Koch (1977). The study based on the three reported sources of breaches generated vignettes corresponding to only 12 of the 14 items in Zhang et al.'s (2004) taxonomy. The two missing cases were constructed through later discussions among the researchers and with the two experts. Tables 2 and 3 contain the resulting vignettes for each of the 14 categories.

The first category of incidents comprises errors related to the formation of goals. As Norman (1988) states, "To get something done, you have to start with some notion of what is wanted, i.e., the goal that is to be achieved." Goal slips arise from forgetting or distorting a goal. Goal mistakes, in contrast, arise most often from faulty knowledge. The two goal examples in Table 2 illustrate, respectively, goals forgotten because of interruptions and goal mistakes attributable to incorrect or incomplete knowledge.

An intention is the decision to act to achieve a goal. Intention slips arise from forgetting an intention or from altered intent. Intention mistakes arise from incorrect or incomplete intentions, such as deciding to act upon a correct goal of providing useful care-related information but selecting faulty heuristics. The action specification stage involves specifying the sequence of actions required to achieve the goal and the intention. Table 2 shows an example of an action specification slip that can arise from activation of a similar but different sequence. Action specification mistakes can arise from specifying an incorrect set of rules that create procedural errors.

The next stage in the action cycle is the physical act of executing a specified action sequence. Slips can arise from automatic activation of a well-learned routine or from perceptual confusion, as in the case illustrated in Table 2. Similarly, execution mistakes arise from the misapplication of good rules (i.e., ones that "work" or are adequate if followed).

Evaluation starts with an actor's perception of the state of the system. Table 3 presents the various causes of errors in evaluation and gives examples of violations. Perception slips may occur because of a lack of perception or an incorrect perception; the example in Table 3 is a case of incorrect perception. Mistakes in perception, on the other hand, can stem from expectation-driven processing of the perception of the state of the system. The next stage is the interpretation of the results of perceptual processing. Confirmation bias can give rise to interpretation slips. Mistakes in interpretation may arise from incorrect or incomplete knowledge. The final stage in an action is the comparison of the system state with the original goal and intention. Evaluation slips may arise from forgetting the original goal, and evaluation mistakes may arise from incorrect knowledge, as the two examples in Table 3 illustrate.

6. Managing human error for HIPAA compliance

The next discussion first presents the prioritized causes of human error as identified by the privacy officers. Then, using the management strategies they suggested for each of these

to constrain user behavior. In 10 of the 14 vignettes described in Appendix C, the root cause can be addressed using such technology-based solutions. For example, when a sensitive document is printed, a sensor on the printer may be activated to ensure timely removal of the document and, if necessary, trigger an audible alert. In Table 5, we list the relevant strategies under two categories: providing better decision aids and using behavior-shaping constraints. Using memory aids, such as daily reminders that pop up on the welcome screen or a checklist of important HIPAA policy guidelines, falls under the former category. Also, the use of RFID technology facilitates the tracking of sensitive paper documents and ensures their safe disposal. Categorized under behavior-shaping constraints are built-in compliance checks in application software and network appliances. Examples include the automatic identification and interception of outbound e-mail containing sensitive information and the delayed sending of e-mail messages to allow for recall. It is also pertinent to emphasize that although technology can help implement a defense-in-depth strategy founded on error avoidance, error interception, and error correction (Liginlal et al., 2009), policies and processes play an equally important role.
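A behavior-shaping constraint of the kind just described can be as simple as a pattern filter in front of a delayed-send queue. The sketch below is hypothetical; the PHI patterns and the queue are illustrative stand-ins, not a cited product's API:

```python
# Hypothetical outbound-mail constraint: scan for PHI-like patterns and
# hold suspicious messages in a delay queue so the sender can recall them.
import re

PHI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),      # SSN-like number
    re.compile(r"\bMRN[:\s]*\d{6,}\b", re.I),  # medical-record-number-like tag
]

held_for_review = []  # stand-in for a delayed-send queue

def send_with_delay(message, recipient):
    """Hold mail that looks like it carries PHI; send everything else."""
    if any(p.search(message) for p in PHI_PATTERNS):
        held_for_review.append((recipient, message))
        return "held: possible PHI detected; queued for review/recall"
    return "sent"

print(send_with_delay("MRN: 1234567 attached for referral", "ext@example.com"))
```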

7. Conclusions, limitations, and future research

In this paper, we used Reason's GEMS typology and Zhang et al.'s (2004) taxonomy of human error, which extends Norman's seven-stage action theory, to categorize the human error underlying privacy violations. Studying publicly available datasets of incidents of privacy breaches helped show that human error causes a majority of reported violations of HIPAA Privacy Rule. Our study of the underlying causes of these errors adopted an interpretive research methodology based on an analysis of semistructured interviews with 15 privacy officers of leading U.S. healthcare organizations. The results revealed that these organizations have difficulty preventing and coping with human errors when these errors (1) are systemic, (2) belong to the knowledge-based mistake category, and (3) are committed by the clinical staff. Understandably, privacy officers noted the need for more practical and efficient privacy policies that give clearer guidelines to clinical staff on how to avoid violating HIPAA Privacy Rule. Norman's seven-stage action theory allows us not only to study the underlying causes of human error but also to propose strategies to prevent or mitigate their effects. We also compiled and then mapped privacy breach vignettes to the action-based taxonomy to study the association between each type of error and the underlying cognitive mechanism. Our results led us to devise a framework and operational strategies for healthcare organizations to use to combat human errors that lead to noncompliance with HIPAA Privacy Rule.

To summarize, this paper's major contribution is its consideration of human error as the origin of most violations of HIPAA Privacy Rule. This study reinforces an earlier general study of privacy breaches by Liginlal et al. (2009). The other significant contributions include the application of Zhang et al.'s (2002) taxonomy of human error to a privacy context and the development of a practical framework to address human error as a major cause of privacy breaches and a key factor in compliance with HIPAA Privacy Rule.

The study has several limitations. First, the adverse impact of human error may not always appear immediately, making it even harder to detect and remedy. Ill-defined processes may have a damaging effect that is not easily observable. For instance, if two employees must talk by telephone to enter clinical data about a patient, others are likely to overhear the phone conversation. Second, the notion of what constitutes human error is often ambiguous because it is based on the result of human activities, not on the activities themselves. From this perspective, only activities with negative outcomes are regarded as human error. Often the blame for human error goes directly to the individual who committed the error, despite the possibility that systemic problems may be at fault. When the problems are systemic, replacing or training individuals brings little improvement, and process-level or organization-level changes are recommended. This is why the blame game should be replaced by a more proactive incentive-based program to deal with human error (ISMP, 1999). The different causes of human error according to action theory show many scenarios in which human error can be a byproduct of environmental or organizational limitations. Our study, however, does not clearly address these differences between the cognitive deficiencies of individuals and the systemic causes of error.

Another shortcoming of this study relates to the limited number of judges that we had access to; this could hinder the generalizability, and as a result the external validity, of our findings. Given the sensitivity of the subject of HIPAA privacy, the limited number of sources is a difficult issue to deal with, because few privacy officers are willing to discuss their experiences for fear of divulging more than they should. We argue, however, that the Rasch method addresses some of the concerns surrounding the use of small sample sizes. In fact, the Rasch method has been shown to be a "somewhat restrictive form of validity and reliability assessment" (Nam et al., 2011, p. 150). Further, the fact that privacy officers are subject matter experts on HIPAA privacy also strengthens the external validity of our results.
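For context, and as our addition rather than the paper's, the dichotomous Rasch model expresses the probability that person n endorses item i in terms of a person parameter and an item parameter:

```latex
P(X_{ni} = 1 \mid \beta_n, \delta_i) = \frac{e^{\beta_n - \delta_i}}{1 + e^{\beta_n - \delta_i}}
```

Here beta_n is the person's endorsement propensity and delta_i the item's difficulty. Because each response is modeled from person and item parameters directly rather than from sample-level correlations, item and person fit can still be examined at sample sizes where classical reliability coefficients are unstable, which is the sense in which the method is argued to ease small-sample concerns.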

Threats to construct validity result from possible subjectivity in data collection and could bias the results. In our study, the construct "human error" is an often broadly used term that each interviewee may interpret differently. Some may regard the noncompliance itself as an error, and others may consider the error a cause of the noncompliance. We attempted to mitigate any threats to construct validity by following a carefully designed interview questionnaire. Given that the interviewees are privacy officers with expertise in HIPAA privacy matters and quite aware of HIPAA terminology, the misinterpretation of the human error construct is less pronounced, as demonstrated by the results of the Rasch analysis.

Finally, there could be issues with the internal validity of our results regarding the causal effects on human error of the various identified items. The general nature of the items (organizational limitations, technology limitations, etc.) that

References

Bogner MS. Human error in medicine. Hillsdale, NJ: Lawrence Erlbaum; 1994.

Campbell K, Gordon LA, Loeb MP, Zhou L. The economic cost of publicly announced information security breaches: empirical evidence from the stock market. Journal of Computer Security 2003;11:431-48.

Cavusoglu H, Mishra B, Raghunathan S. The effect of Internet security breach announcements on market value: capital market reactions for breached firms and Internet security developers. International Journal of Electronic Commerce 2004;9:69-104.

Cranor L. A framework for reasoning about the human in the loop. Proceedings of the 1st Conference on Usability, Psychology, and Security; San Francisco, CA; 2008 [cited 2010 October 3]. Available from: <http://www.usenix.org/event/upsec08/tech/full_papers/cranor/cranor.pdf>.

Erickson K, Howard PN. A case of mistaken identity? News accounts of hacker, consumer, and organizational responsibility for compromised digital records. Journal of Computer-Mediated Communication 2007;12:1229-47.

Gladwell M. Blink: the power of thinking without thinking. New York: Little, Brown and Company; 2005.

ISMP. Medication error prevention "toolbox." ISMP Medication Safety Alert 1999 June 2 [cited 2010 May 28]. Available from: <http://www.ismp.org/MSAarticles/Toolbox.html>.

ISMP. Organizations release new tools for reducing medication errors. ISMP News Release 2002 Dec 10 [cited 2010 May 28]. Available from: <http://www.ismp.org/pressroom/PR20021210.pdf>.

Institute of Medicine. To err is human: building a safer health system. Washington, DC: National Academy Press; 2000.

Institute of Medicine. Crossing the quality chasm: a new health system for the 21st century. Washington, DC: National Academy Press; 2001.

Jiang X, Landay JA. Modeling privacy control in context-aware systems. IEEE Pervasive Computing 2002;1:59-63.

Kjaerland M. A taxonomy and comparison of computer security incidents from the commercial and government sectors. Computers & Security 2006;25:522-38.

Kolter J, Kernchen T, Pernul G. Collaborative privacy management. Computers & Security 2010;29(5):580-91.

Kraemer S, Carayon P. Human errors and violations in computer and information security: the viewpoint of network administrators and security specialists. Applied Ergonomics 2007;38:143-54.

Kraemer S, Carayon P, Clem J. Human and organizational factors in computer and information security: pathways to vulnerabilities. Computers & Security 2009;28(7):509-20.

Kulynych J, Korn D. The effect of the new federal medical Privacy Rule on research. New England Journal of Medicine 2002;346:201-4.

Landis JR, Koch GG. Measurement of observer agreement for categorical data. Biometrics 1977;33:159-74.

Leape LL, Lawthers AG, Brennan TA, Johnson WG. Preventing medical injury. Quality Review Bulletin 1993;19(5):144-9.

Lenert LA, Bakken S. Enabling patient safety through informatics: works from the 2001 AMIA annual symposium and educational curricula for enhancing safety awareness. Editorial introduction. Journal of the American Medical Informatics Association 2002;9(6):S1-132.

Leveson NG. Technical and managerial factors in the NASA Challenger and Columbia losses: looking forward to the future. In: Kleinman DL, Cloud-Hansen KA, Matta C, Handelsman J, editors. Controversies in science & technology, volume 2: from climate to chromosomes. Mary Ann Liebert, Inc.; 2008.

Leveson NG, Cutcher-Gershenfeld J, Carroll JS, Barrett B, Dulac N, Marais K. Systems approaches to safety: NASA and the Space Shuttle disasters. In: Starbuck WH, Farjoun M, editors. Organization at the limit: lessons from the Columbia disaster. Malden, MA: Blackwell Publishing; 2005. p. 269-88.

Liginlal D, Sim I, Khansa L. How significant is human error as a cause of privacy breaches? An empirical study and a framework for error management. Computers & Security 2009;28:215-28.

McFadden KL, Towell ER, Stock GN. Critical success factors for controlling and managing hospital errors. Quality Management Journal 2004;11(1):61-4.

Mohaghegh Z, Mosleh A. Measurement techniques for organizational safety causal models: characterization and suggestions for enhancements. Safety Science 2009;47:1398-409.

Myers MD, Newman M. The qualitative interview in IS research: examining the craft. Information & Organization 2007;17(1):2-26.

Nam SK, Yang E, Lee SM, Lee SH, Seol H. A psychometric evaluation of the career decision self-efficacy scale with Korean students: a Rasch model approach. Journal of Career Development 2011;38:147-66.

Norman DA. The psychology of everyday things. New York: Basic Books; 1988.

Patel VL, Bates DW. Cognition and measurement in patient safety research. Guest editorial. Journal of Biomedical Informatics 2003;36(1-2):1-3.

Portland Business Journal. Providence to pay $100K for HIPAA violations [cited 2010 Sep 9]. Available from: <http://www.bizjournals.com/portland/stories/2008/07/21/daily9.html>.

Privacy Rights Clearinghouse. A chronology of data breaches. 2010 Oct [cited 2010 October 1]. Available from: <http://www.privacyrights.org/ar/ChronDataBreaches.htm>.

Rasch G. Probabilistic models for some intelligence and attainment tests. Chicago: Mesa Press; 1960.

Rasmussen J. Skills, rules, and knowledge; signals, signs, and symbols, and other distinctions in human performance models. IEEE Transactions on Systems, Man, and Cybernetics 1983;13:257-66.

Reason J. Human error. Cambridge, UK: Cambridge University Press; 1990.

Schonbeck M, Rausand M, Rouvroye J. Human and organizational factors in the operational phase of safety instrumented systems: a new approach. Safety Science 2010;48:310-8.

Schultz E. The human factor in security. Computers & Security 2005;24:425-6.

Senders J, Moray N. Human error: cause, prediction, and reduction. Hillsdale, NJ: Lawrence Erlbaum; 1991.

Shappell S, Detwiler C, Holcomb K, Hackworth C, Boquet A, Wiegmann DA. Human error and commercial aviation accidents: an analysis using the human factors analysis and classification system. Human Factors: The Journal of the Human Factors and Ergonomics Society 2007;49:227-42.

Sim I. Online information privacy and privacy-protective behavior: how does situation awareness matter? Ph.D. dissertation, The University of Wisconsin-Madison; 2010.

Stanton JM, Stam KR, Mastrangelo P, Jolton J. Analysis of end user security behaviors. Computers & Security 2005;24:124-33.

Sullivan B. "La difference" is stark in EU, U.S. privacy laws. MSNBC [Internet]. 2006 Oct [cited 2010 May 28]. Available from: <http://www.msnbc.msn.com/id/15221111/>.

Targoutzidis A. Incorporating human factors into a simplified "bow-tie" approach for workplace risk assessment. Safety Science 2010;48:145-56.

US Department of Health & Human Services. Standards for privacy of individually identifiable health information. Federal Register 2002;67(157):53181-273.

US Department of Health & Human Services. HIPAA security guidance. 2006 Dec [cited 2010 May 28]. Available from: <http://www.compliancehome.com/resources/HIPAA/Regulations/abstract10416.html>.

Vincent C, Taylor-Adams S, Stanhope N. Framework for analyzing risk and safety in clinical medicine. British Medical Journal 1998;316:1154-7.

Walker JM, Carayon P, Leveson N, Paulus RA, Tooker J, Chin H, Bothe A, Stewart WF. EHR safety: the way forward to safe and effective systems. Journal of the American Medical Informatics Association 2008;15(3):272-7.

Weinstein A. The bandwagon is outside waiting. Health Management Technology 2001;22(5):50-2.

Westin A. Privacy and freedom. New York: Atheneum; 1967.

Winsteps [computer software]. Rasch measurement software and publications. 2010 September [cited 2010 May 28]. Available from: <http://www.winsteps.com>.

Wood CC, Banks WW Jr. Human error: an overlooked but significant information security problem. Computers & Security 1993;12:51-60.

Wright BD, Masters GN. Rating scale analysis. Chicago: MESA Press; 1982.

Zhang J, Patel VL, Johnson TR, Shortliffe EH. Toward an action-based taxonomy of human error in medicine. In: Proceedings of the Twenty-Fourth Annual Conference of the Cognitive Science Society; 2002. p. 970-5.

Zhang J, Patel VL, Johnson TR, Shortliffe EH. A cognitive taxonomy of medical errors. Journal of Biomedical Informatics 2004;37:193-204.

Zogby International. Most Americans worry about identity theft. 2007 April [cited 2010 October 1]. Available from: <http://www.zogby.com/NEWS/readnews.cfm>.

Divakaran Liginlal (Lal) is an Associate Teaching Professor of Information Systems at Carnegie Mellon University. Lal holds a B.S. in Telecommunication Engineering from CET, an M.S. in Computer Science from the Indian Institute of Science, and a Ph.D. in Management Information Systems from the University of Arizona. Before joining CMU, he taught at three U.S. universities, including nine years at the University of Wisconsin-Madison. Lal's research in information security and decision support systems has been published in such journals as CACM, IEEE TKDE, IEEE SMC, European Journal of Operational Research, Computers & Security, Decision Support Systems, and Fuzzy Sets & Systems. Lal's teaching and research have been supported by organizations such as Microsoft Corporation, Hewlett-Packard, CISCO, and the ICAIR at the University of Florida.

Inkook Sim holds a Ph.D. in information systems from the Department of Operations and Information Management at the University of Wisconsin-Madison School of Business. His research interests are in information privacy, social networking, context-aware system design, and IT productivity and evaluation. Dr. Sim has published his research in Computers & Security and in peer-reviewed international conferences. He is a member of the Association for Information Systems and INFORMS. Contact him at [email protected].

Lara Khansa is an Assistant Professor in the Department of Business Information Technology at Virginia Tech. She received her Ph.D. in Information Systems, M.S. in Computer Engineering, and MBA in Finance and Investment Banking from the University of Wisconsin-Madison. Her primary research interests include socio-economic aspects of information security and privacy, convergence and restructuring of the IT industry, and information technology innovations. Dr. Khansa has published in leading journals including Decision Support Systems, European Journal of Operational Research, Communications of the ACM, Computers & Security, and IEEE Technology and Society, among others. She is a member of the Association for Information Systems (AIS), the Institute of Electrical and Electronics Engineers (IEEE), and the Beta Gamma Sigma National Honor Society.

Paul Fearn is a National Library of Medicine Informatics Research Fellow and Ph.D. student at the University of Washington Department of Medical Education and Biomedical Informatics. His current research is in the area of formal models for biorepository processes and systems, and he is working with the Clinical Research Data Systems group at Fred Hutchinson Cancer Research Center and Seattle Cancer Care Alliance. Previously, Paul was the Informatics Manager for the Department of Surgery and the Office of Strategic Planning and Innovation at Memorial Sloan-Kettering Cancer Center (MSKCC), and he represented MSKCC in the Inter-SPORE Prostate Biomarker Study (IPBS) informatics effort. He initiated and led the Caisis project, creating an open-source system to manage patient care and research information that is currently used at over 25 centers. Paul has a B.A. in Spanish from the University of Houston, graduate work in biometry from the University of Texas School of Public Health in Houston, and an M.B.A. from the New York University Stern School of Business. He has over 12 years of experience in cancer research informatics with MSKCC and Baylor College of Medicine, and he has recently visited 60 cancer centers to discuss database and informatics efforts and issues.