
Ranking of Dependability-Relevant Indicators for Availability Enhancement of Enterprise Information Systems

Tobias Goldschmidt and Miroslaw Malek

Department of Computer Science, Humboldt University Berlin

Unter den Linden 6, 10099 Berlin, Germany

Phone: +49 (0)30 2093 3037, Fax: +49 (0)30 2093 3029

Email: {goldschm|malek}@informatik.hu-berlin.de

There are several methods and tools that identify critical hardware and software components in order to improve system dependability. For complex computer architectures and IT services, however, no comprehensive analytical approach to availability enhancement of Enterprise Information Systems such as data centers or computing clouds exists. The main reasons why the traditional modeling and evaluation methodologies frequently fail are information systems dynamicity (frequent updates, upgrades and reconfigurations), the large number of degrees of freedom and the lack of transparency. Reference process models have been developed in industry as an alternative to the analytical approaches for optimization of Enterprise Information Systems. Reference process models establish and improve corresponding processes to ensure acceptable availability levels.

In this paper, we examine six main reference process models, including the IT Infrastructure Library and Control Objectives for Information and Related Technology. We propose a quantifiable approach to prioritize key indicators having impact on availability of Enterprise Information Systems. Using reference process models, we analyze availability-relevant processes and develop an evaluation method to derive the relevance of indicators with respect to availability. The indicator rank defines the order of the most effective strategic actions, enabling effective and efficient availability enhancement of Enterprise Information Systems.

Keywords: IT Governance; IT Service Management; Enterprise Information System Management; Reference Process Models; ITIL®; CobiT®; CMMI®; SPiCE; Availability; Dependability


1. Introduction

As information technology (IT) systems become ubiquitous, diverse and more powerful than ever, their dynamics and complexity grow at the same time. Hence, the expenses caused by system failures have increased. Moreover, the demand for controlling instruments regarding critical business processes, optimized maintenance and IT consolidation is on the rise. In addition to complexity and dynamicity, another main reason for this situation is the addition of new functionality, often by cumulative integration of already existing legacy systems, without adequate comprehension of the interaction among the components. In a nutshell, traditional methods to capture and analyze the system state or to enhance its dependability are not keeping up with the growth in complexity and interconnectivity of industrial systems. Analytical approaches do not scale up for real systems and often fail because of a large number of degrees of freedom (Hoffmann et al. 2007, Hoffmann and Malek 2006). Furthermore, maintenance costs, including licenses, are a main factor in the Total Cost of Ownership (TCO), reaching in some cases 95% of the TCO.

Enterprise Information Systems (EIS) such as data centers and computing clouds are expected to have their IT processes implemented in accordance with well-established standards. Implementing IT processes in an organized manner, as proposed in generic reference process models (RPMs) such as the IT Infrastructure Library (ITIL®) or Control Objectives for Information and Related Technology (CobiT®), facilitates their traceability, assessability and comparability, and typically results in higher dependability.

One of the main applications of IT is running services to support business processes. These services do not only have to be technically implemented but also organizationally deployed, maintained and executed to be able to react flexibly to changing needs and occurring failures. High-quality processes and sequences of operation, supported by qualified staff and appropriate software tools, are necessary for the successful handling of such IT services. Current approaches to service dependability evaluation focus mainly on hardware and software resources (Milanovic et al. 2008, Malek et al. 2008). A comprehensive evaluation is only possible if Enterprise Information Systems, including infrastructure and personnel, are considered as well.

Generic IT RPMs have been developed to describe business processes in idealized form (best practice). They focus on business processes incorporating interaction of software, hardware, infrastructure and people.

Definition 1.1: Enterprise Information System
An Enterprise Information System of an organization (‘an enterprise’) is an assembly of technical components such as hardware, software, infrastructure and personnel, as well as organizational IT processes and relations among them, based on established generic reference process models.

In this paper, we focus on Enterprise Information System processes which govern business processes related to the management of IT infrastructure. RPMs align Enterprise Information System processes in accordance with given standards. One of the important and popular models – CobiT® (ITGI 2007) – addresses dependability as an umbrella term. Other generic reference models such as ITIL® (OGC 2007), CITIL® (wibas 2007), CMMI® (CMMI Product Team 2010), SPiCE Lite [ITSM] (Nehfort 2007, Steinmann and Stienen 2002) or MOF (Norton-Middaugh et al. 2008) address subtopics of dependability, e.g. reliability. In addition, availability, which frequently is an important property to clients, is part of dependability. The RPMs provide companies with the opportunity to conduct their activities more effectively and efficiently, especially with respect to Enterprise Information System processes. These standardized approaches serve as an idealized model and have to be specifically adjusted for every given use case.

Problems arise when evaluating the dependability of Enterprise Information System process organization in practice. On the one hand, we would like to have mathematical models that precisely determine the reliability or availability of individual components. However, because of their complexity and level of detail, such models are difficult to derive and frequently not very useful for most industrial systems. They are further limited by the lack of information on failure and repair rates of hardware and, above all, software components, and by the lack of a satisfactory level of detail regarding dependability. On the other hand, generic RPMs for IT process organization have a high level of abstraction and, therefore, do not allow exact dependability evaluation.

This paper proposes the application of analytical methods to reference process model-like organizational approaches in order to quantitatively evaluate Enterprise Information Systems. We accomplish this by systematically analyzing Enterprise Information System processes within the RPMs to derive the processes that are relevant to dependability (Goldschmidt et al. 2009). On the basis of developed knock-out criteria and a proposed evaluation matrix, we are able to judge the significance of individual indicators for the overall dependability of Enterprise Information Systems. Thereby, an indicator is a metric that is used to help manage a process, IT service or activity. Many metrics may be measured, but only the most important of them are defined as indicators and used to actively manage and report on the process, IT service or activity. Indicators should be selected to ensure that efficiency, effectiveness, and cost effectiveness are all managed. Each indicator is quantified by a single number, called the indicator rank (IR). The IR allows ranking the significance of each indicator with respect to availability of Enterprise Information Systems while masking the complexity of its assessment.
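The idea of collapsing an indicator to a single ranking number can be illustrated with a minimal sketch. The indicator names, attributes (availability impact and measurability) and weights below are purely hypothetical placeholders, not the evaluation matrix developed in this paper:

```python
# Hypothetical sketch: aggregate several indicator attributes into a single
# score, then rank the indicators by it. Names, attributes and weights are
# illustrative assumptions only.
indicators = {
    # name: (availability impact 0..1, measurability 0..1)
    "unplanned downtime per service": (0.9, 0.8),
    "mean time to restore service":   (0.8, 0.9),
    "number of emergency changes":    (0.6, 0.7),
    "staff training hours":           (0.3, 0.6),
}

def indicator_rank(impact, measurability, w_impact=0.7, w_meas=0.3):
    """Weighted aggregation of indicator attributes into one number."""
    return w_impact * impact + w_meas * measurability

ranking = sorted(indicators,
                 key=lambda name: indicator_rank(*indicators[name]),
                 reverse=True)
print(ranking[0])  # prints the highest-ranked indicator
```

Whatever the concrete scoring scheme, the point is the same: the single IR value hides the multi-attribute assessment behind one comparable number.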

The rest of the paper is organized as follows. In Section 2, we state the problem of quantitatively assessing and enhancing Enterprise Information System processes and describe existing approaches. Section 3 presents nine processes of ITIL® that are relevant to availability of Enterprise Information Systems. An analytical approach is proposed in Section 4 to answer an important question: which indicators are more important than others for enhancing availability of Enterprise Information Systems? Section 5 presents results from a case study. We draw conclusions in Section 6.

2. Basic Problem, Related Work and Approaches

Currently, there exists no scientifically established method for availability enhancement of Enterprise Information Systems which uses systematically identified and ranked key performance indicators. This paper is most likely the first attempt in this direction, although several works have indicated such a need in general (Debreceny and Gray 2009, Simonsson and Johnson 2008).

As an important milestone, the examination of the RPMs IT Infrastructure Library (ITIL®) and Control Objectives for Information and Related Technology (CobiT®) to derive a quantifiable concept for estimating the criticality of dependability-related Enterprise Information System processes in CobiT® is described in (Goldschmidt et al. 2009). After systematically analyzing ITIL® processes and deriving properties that are relevant to dependability, those processes are mapped onto CobiT® processes. Furthermore, a process criticality index (PCI) is proposed which reflects the significance of each dependability-related process within a particular RPM. The PCI is based on the graph-theoretic concept of betweenness centrality and uses a directed graph whose nodes represent dependability-related processes and whose edges represent relations among them. Finally, using cycle and sequence analysis it is possible to show the relative significance of each process. This provides an efficient strategy: identify the most significant processes according to the PCI-based ranking and implement them first for the highest availability gain.
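The full PCI additionally involves cycle and sequence analysis, which is beyond a short example, but its graph-theoretic core can be sketched. The following is a minimal, self-contained implementation of Brandes' betweenness centrality for a directed, unweighted graph; the process graph and its edges are hypothetical and are not the graph from (Goldschmidt et al. 2009):

```python
from collections import deque

def betweenness(graph):
    """Brandes' betweenness centrality for a directed, unweighted graph
    given as {node: [successors]}. Returns {node: centrality}."""
    bc = {v: 0.0 for v in graph}
    for s in graph:
        stack, pred = [], {v: [] for v in graph}
        sigma = {v: 0 for v in graph}; sigma[s] = 1   # number of shortest paths
        dist = {v: -1 for v in graph}; dist[s] = 0
        queue = deque([s])
        while queue:                                   # BFS from source s
            v = queue.popleft()
            stack.append(v)
            for w in graph[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:             # w on a shortest path via v
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        delta = {v: 0.0 for v in graph}
        while stack:                                   # back-propagate dependencies
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# Hypothetical process graph; an edge means "feeds information to".
processes = {
    "IM":  ["PM", "CHM"],
    "PM":  ["CHM"],
    "CHM": ["RM", "CM"],
    "RM":  ["CM"],
    "CM":  ["AM"],
    "AM":  ["SLM"],
    "SLM": [],
}
centrality = betweenness(processes)
ranked = sorted(centrality, key=centrality.get, reverse=True)
```

In this toy graph, Change Management and Configuration Management lie on the most shortest paths and would be implemented first under a PCI-style strategy.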

The CMMI® for IT operations (CITIL®) (wibas 2007) is an integration of the two RPMs ITIL® and CMMI®. It provides an integrated model which addresses IT operations and IT development with a common and consistent approach. The combination of both models in one single framework supports the improvement of both the development and the operation aspects of IT products and services. Additionally, CITIL® describes the interfaces between IT development and IT operations and enables a common understanding of the corresponding activities, so IT development and IT operations teams can improve their performance together. CITIL® has been developed with the support of the Office of Government Commerce and the Software Engineering Institute.

Another approach is CMMI for Services®, which extends the coverage of CMMI® from development and acquisition into service delivery (CMMI Product Team 2010). CMMI for Services® promises to improve consistency and payoff and to provide fuller coverage for process areas necessary to services that are not covered by the current CMMI® models. In contrast, CITIL® is not a new model, but rather an integration of the established ITIL® into CMMI®, with full conformance to both ITIL® and CMMI®. CITIL® deals specifically with IT operation, while CMMI for Services® covers all types of services. CITIL® and CMMI® complement each other well and together result in a model which covers all activities of an Enterprise Information System.

At present, there exist tools for qualitative self-assessment of Enterprise Information Systems, such as SPiCE Lite [ITSM] (IT service management). It was developed by Nehfort IT Consulting KEG in cooperation with HM&S, SynSpace and TU Graz (Nehfort 2007, Steinmann and Stienen 2002). It works on the basis of processes of the ITIL® RPM, with a focus on service support and service delivery. In a nutshell, SPiCE Lite [ITSM] supports the guided assessment of ITIL® Enterprise Information System processes. SPiCE Lite [ITSM] applies its own maturity level model to ITIL® processes. It thus provides a qualitative evaluation of process maturity in accordance with the SPiCE process maturity model (ISO/IEC 15504) (ISO/IEC 2003). The tool consists of a preparatory part and an examination part. The preparatory part contains 37 questions referring to the different processes. It covers not only the whole software development process but also the complete IT service management processes. Different process attributes are addressed in each question. The auditor assigns a completion level between 0% and 100% or, respectively, the rating N (not achieved, 0% to 15%), P (partly achieved, 16% to 50%), L (largely achieved, 51% to 85%) or F (fully achieved, 86% to 100%). This tool is one of the first approaches which provides a qualitative evaluation of Enterprise Information System ITIL® processes by a process mapping to another generic RPM.
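The completion-level scale described above maps directly to the four SPiCE achievement ratings. A minimal sketch of that mapping (the function name and the integer-percent boundaries are our own reading of the scale):

```python
def spice_rating(completion):
    """Map a completion level (0-100 %) to the SPiCE achievement rating:
    N (0-15 %), P (16-50 %), L (51-85 %), F (86-100 %)."""
    if not 0 <= completion <= 100:
        raise ValueError("completion must be between 0 and 100")
    if completion <= 15:
        return "N"   # not achieved
    if completion <= 50:
        return "P"   # partly achieved
    if completion <= 85:
        return "L"   # largely achieved
    return "F"       # fully achieved

# Example: ratings for four sample completion levels.
print([spice_rating(p) for p in (10, 40, 70, 95)])  # prints ['N', 'P', 'L', 'F']
```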


3. Selection of Processes with Respect to Enterprise Information System Availability on the Basis of ITIL®

As mentioned in Section 2, there is currently no possibility of quantitative availability evaluation and enhancement of Enterprise Information Systems by systematically identified and ranked indicators. In this section, we propose a systematic method for selecting Enterprise Information System processes with respect to availability evaluation and enhancement.

Enterprise Information System processes are implemented based on established standards. Therefore, it is possible to explore the generic RPMs with the goal of identifying every management process that concerns dependability enhancement. The selection of all these management processes was carried out as follows. First, dependability-related processes were systematically identified in ITIL® Version 3. The constant updating of the ITIL® RPM led to a restructuring and a new development at the beginning of 2007. The central processes of service delivery and service support of ITIL® Version 2 were transferred unchanged to ITIL® Version 3. On the basis of the mapping and the precise analysis described in detail in (Goldschmidt 2009, Goldschmidt et al. 2009), the focus is on nine central processes of the ITIL® Version 3 RPM with respect to dependability. These nine dependability-related processes can be subdivided into

• processes with direct impact, and
• processes with indirect impact

on availability enhancement of Enterprise Information Systems. A description of these nine central processes is given in Tables 1 and 2.

Table 1. Primary Processes with direct impact on availability enhancement of Enterprise Information Systems (OGC 2007)

Availability Management (AM): The process is responsible for defining, analyzing, planning, measuring and improving all aspects of the availability of IT services. AM is responsible for ensuring that all IT infrastructure, processes, tools, roles etc. are appropriate for the agreed service level targets for availability.

Change Management (CHM): The primary objective of CHM is to enable smooth changes to be made with minimum disruption to IT services.

Incident Management (IM): The primary objective of IM is to return the IT service to users as quickly as possible.

IT Service Continuity Management (ITSCM): The process is responsible for managing risks that could seriously impact IT services. ITSCM ensures that the IT service provider can always provide minimum agreed service levels, by reducing the risk to an acceptable level and planning for the recovery of IT services.


Table 2. Secondary Processes with indirect impact on availability enhancement of Enterprise Information Systems (OGC 2007)

Capacity Management (CAM): The process is responsible for ensuring that the capacity of IT services and the IT infrastructure is able to deliver agreed service level targets in a cost-effective and timely manner. CAM considers all resources required to deliver the IT service, and plans for short-, medium- and long-term business requirements.

Configuration Management (CM): The process is responsible for maintaining information about configuration items (CIs) required to deliver an IT service, including their relationships. This information is managed throughout the lifecycle of the CI.

Problem Management (PM): The primary objectives of PM are to prevent incidents from happening, and to minimize the impact of incidents that cannot be prevented.

Release Management (RM): The primary objective of RM is to ensure that the integrity of the live environment is protected and that the correct components are released.

Service Level Management (SLM): The process is responsible for negotiating service level agreements and ensuring that these are met. SLM is responsible for ensuring that all IT service management processes, operational level agreements and underpinning contracts are appropriate for the agreed service level targets. SLM monitors and reports on service levels, and holds regular customer reviews.

3.1. Selection of Similar Reference Process Models on the Basis of Process Mapping

In this subsection, we propose a holistic approach with the goal of identifying similar generic RPMs to extend the process definition focus of ITIL® Version 3 toward process requirement and process improvement, so as to cover all aspects of availability enhancement of Enterprise Information Systems. Therefore, it is necessary to explore the generic RPMs with the goal of identifying every management process that concerns dependability.

Definition 3.1: Process Mapping
A process mapping identifies a content-wise relation between a process of one generic reference process model and the corresponding process of matching content in another generic reference process model.

For processes which possess this characteristic, a mapping exists.

A holistic approach is based on the mapping of the identified processes onto RPMs with similar characteristics with respect to availability of Enterprise Information Systems. This is an important criterion for exclusion. Thus, the selected RPMs can be comparatively analyzed and an efficient process mapping is possible. CobiT®, CITIL®, CMMI®, MOF as well as SPiCE Lite [ITSM] were selected on the basis of their mappings onto the identified processes as well as their awareness level. All important RPMs available at present use sections of CMM(I)® and/or ITIL® (see Figure 1). Thereby the dominant position of CMM® and ITIL® is clear. RPMs as well as ISO standards can be subdivided into three areas of validity (see Figure 2):

• IT management,
• IT engineering and
• IT operation.

Furthermore, it is also possible to subdivide RPMs on the basis of their main focus:

• Process definition,
• Process requirement and
• Process improvement.

Beyond that, the good coverage of the respective RPMs makes a process mapping possible. The target of all regarded RPMs is to point out alternatives as well as options about WHAT (IT service management solutions) is to be done. Each enterprise has to decide for itself HOW the alternatives and options are to be applied. Consequently, RPMs only point out WHAT needs to be done. For this reason it is necessary to include further RPMs to answer the question regarding the HOW. Due to the holistic approach to the identified processes over several RPMs, one can make more precise statements about availability enhancement of Enterprise Information Systems.

CobiT® maps up to 85% (29 out of 34 processes) onto the nine identified processes. This can be viewed as an indication of the critical significance of these nine processes. Hence, CobiT® belongs to the generic RPMs covering the whole field of IT management, engineering and operation. In total, 72 processes of the six selected RPMs with impact on dependability were selected on the basis of this process mapping to cover all aspects of availability enhancement of Enterprise Information Systems (see Table 3).
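The quoted coverage figure follows from simple arithmetic; `coverage` is a hypothetical helper, and the counts (29 mapped out of 34 CobiT® processes) come from the text:

```python
def coverage(mapped_processes, total_processes):
    """Percentage of a reference model's processes that map onto the nine
    identified processes, rounded to a whole percent."""
    return round(100 * mapped_processes / total_processes)

# CobiT 4.1: 29 of its 34 processes map onto the nine identified processes.
print(coverage(29, 34))  # prints 85
```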

4. Selection and Ranking of Indicators with Respect to Availability of Enterprise Information Systems

An extensive set of indicators supports the introduction of a comprehensive framework for Process Controlling. Defining and selecting suitable indicators is above all about deciding what exactly is considered a ‘successful process execution’. Once this is established, it becomes possible to determine and measure specific indicators. Process Owners and Controllers are thus in a position to evaluate the quality of their processes, which in turn is the basis for the ongoing optimization and fine-tuning of the process designs. The selection of suitable indicators will, among other things, depend on the possibility to actually measure them. The indicators and the corresponding measurement procedures are therefore an important input for system requirements. The suggested indicators comply with the ITIL® v3 recommendations and were enhanced with elements from CITIL®, CobiT®, CMMI®, MOF and SPiCE Lite [ITSM] (see Table 3).

At present, two different types of indicators exist: Key Goal Indicators (KGIs) on the one hand and Key Performance Indicators (KPIs) on the other. KPIs are quantifiable indicators which measure certain business objectives. Both give information about the level of achievement of business objectives by IT. Performance and target indicators can generally be differentiated; the focus here is on effectiveness and efficiency. KGIs focus on effectiveness (‘do the right thing’), while KPIs give information about the success of the implementation strategy (‘do it correctly’). A further advantage of indicators is that the actual condition of an enterprise can be measured and possible improvements can be identified in order to achieve a given, specified improvement.

Figure 1. Chronology of development and relations among the most important generic reference process models, reflected on the time line.

Figure 2. Classification of reference process models by scope (IT management, IT engineering, IT operation) and main focus (process definition, process requirement, process improvement); the models shown include CobiT, ITIL v3, CMMI v1.3, CITIL v1.0, MOF v4, SPiCE, MITO, ISO 9001, ISO 12207 and ISO 27002.

In a nutshell, indicators are used to assess the processes of an Enterprise Information System, specifically, to quantify the availability status of the nine identified central processes given in Tables 1 and 2.

A set of approximately 3000 indicators from the individually analyzed generic RPMs relies on the experience of process managers and process owners (persons responsible for a process or a subprocess). An alternative to this procedure would be to use one of the feature selection methods (Liu and Yu 2005). The starting point for the establishment of indicators in the presented RPMs is the management's need for information. In addition to the measuring target and posing suitable questions, relevant metrics should be selected. The need for specific indicators should be established by the analysis of the measured values. Best practices form a good basis for the choice of indicators. On the basis of these established empirical values it is possible to determine how strongly the respective process is incorporated. These indicators, as well as the strong relations among the introduced RPMs, form the starting point for availability enhancement of Enterprise Information Systems.

4.1. Selection of Indicators with Respect to Availability of Enterprise Information Systems

Because of the plenitude of redundancies and the wide-ranging specification of the collected set of indicators, in a first step a reduction with the developed knock-out criteria should take place. These knock-out criteria are developed in accordance with the value benefit analysis and can be divided into seven areas, found in Table 4 with correspond-


Table 3. Mapping table of the primary and secondary processes with direct and indirect impact on availability enhancement of Enterprise Information Systems

Identified Processes | ITIL® v3 | CMMI® v1.3 | CITIL® v1.0 | CobiT® v4.1 | MOF v4.0 | SPiCE Lite [ITSM] v1.0
AM | AM | REQM, PP, PMC | AM | PO2, PO3, PO4, PO8, PO9, AI3, DS3, DS11, ME1 | Reliability SMF | AM
CHM | CHM | CM | CHM | PO4, PO8, PO9, PO10, AI2, AI3, AI4, AI6, DS12, DS13, ME1, ME2, ME3, ME4 | Change and Configuration SMF | CHM for IT Service Support
IM | IM | X | IM | PO4, PO8, DS8, ME1, ME2 | Customer Service SMF | IM
ITSCM | ITSCM | RSKM | ITSCM | PO3, PO4, PO8, PO9, DS4, ME2 | Reliability SMF | Service Continuity Management
CAM | CAM | REQM, PP, PMC | CAM | PO3, PO4, PO7, PO8, PO9, AI1, AI3, DS3, DS12, ME1 | Reliability SMF | CAM for IT Service Delivery
CM | CM | CM | CM | PO4, PO8, DS9, DS12, DS13, ME1, ME2, ME3, ME4 | Change and Configuration SMF | CM for IT Service Support
PM | PM | CAR | PM | PO4, PO8, PO9, DS10, ME1, ME2 | PM SMF | PM
RM | RM | CM | RM | PO4, PO8, PO9, PO10, AI2, AI3, AI4, AI5, AI7, DS7, ME1, ME4 | Deploy SMF | RM for Service Support
SLM | SLM | X | SLM | PO4, PO6, PO8, PO9, DS1, ME1, ME4 | Business/IT Alignment | SLM

Abbreviations:

• CobiT®: PO2 - Define the information architecture; PO3 - Determine technological direction; PO4 - Define the IT processes, organization and relationships; PO6 - Communicate management aims and directions; PO7 - Manage IT human resources; PO8 - Manage quality; PO9 - Assess and manage IT risks; PO10 - Manage projects; AI1 - Identify automated solutions; AI2 - Acquire and maintain application software; AI3 - Acquire and maintain technology infrastructure; AI4 - Enable operation and use; AI5 - Procure IT resources; AI6 - Manage changes; AI7 - Install and accredit solutions and changes; DS1 - Define and manage service levels; DS3 - Manage performance and capacity; DS4 - Ensure continuous service; DS7 - Educate and train users; DS8 - Manage service desk and incidents; DS9 - Manage the configuration; DS10 - Manage problems; DS11 - Manage data; DS12 - Manage the physical environment; DS13 - Manage operations; ME1 - Monitor and evaluate IT performance; ME2 - Monitor and evaluate internal control; ME3 - Ensure regulatory compliance; ME4 - Provide IT governance

• ITIL®, CITIL®, CMMI®, MOF and SPiCE Lite [ITSM]: AM - Availability Management; CHM - Change Management; IM - Incident Management; ITSCM - IT Service Continuity Management; CAM - Capacity Management; CM - Configuration Management; PM - Problem Management; RM - Release Management; SLM - Service Level Management; REQM - Requirements Management; PP - Project Planning; PMC - Project Monitoring and Control; RSKM - Risk Management; CAR - Causal Analysis and Resolution


Table 4. Knock-out criteria

Number | Knock-out criterion (Y/N) | Brief description | Example on the basis of change management (ITIL® v3)
KO 1 | The indicator refers to an Enterprise Information System state which is outside of the normal operational specification (e.g. emergency). | Indicator is outside of specification | Percentage of total changes which are related to emergency fixes
KO 2 | The expected effort for data acquisition of the indicator is disproportionate. | High effort on data collection | Number of interrupts or data errors which are caused by inaccurate specification or incomplete evaluation of the effects
KO 3 | Privacy issues (extreme requirements at privacy). | Data collection is not possible | Name, position and contact information of the change initiator
KO 4 | The indicator is not relevant for a particular organization. | Non-transparent indicator | Percentage of only cost-conforming changes
KO 5 | The indicator makes no apparent contribution to the evaluation of availability of Enterprise Information Systems. | Irrelevant indicator | Degree of satisfaction of the senior management with reports of internal control monitoring
KO 6 | The data basis for the indicator occurs rarely, or can only be quantified or estimated rarely (rare event problems). | Rare event | Percentage of the projects that fulfill stakeholder expectations (timeliness, within the budget and requirements)
KO 7 | The indicator does not add any extra value beyond an already existing indicator. | No value added | Number of non-timely changes versus number of changes within time estimation

All indicators which do not pass one or more knock-out criteria drop out from further analysis, which means a total reduction of 88% (in case of a 100% reduction, another RPM with a focus on availability should be applied). Due to this selection, the set of indicators can be significantly reduced, and a focus on the dependability aspect of Enterprise Information Systems is therefore feasible and successfully accomplished. On the basis of this uniform reduction of the set of indicators regarding the target objective, it is possible to rank the significance of these dependability-related indicators in a next step.
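As a rough sketch, the knock-out reduction can be expressed as a simple filter: an indicator survives only if none of the seven Y/N criteria of Table 4 applies. All names, attribute keys and sample data below are illustrative assumptions, not from the paper.

```python
# Hypothetical sketch of the knock-out reduction step: each collected
# indicator is checked against the seven Y/N criteria (KO 1 - KO 7);
# a single failed criterion removes it from further analysis.

def passes_knockout(indicator, criteria):
    """Return True if the indicator fails none of the Y/N criteria."""
    return not any(ko(indicator) for ko in criteria)

# The seven criteria as predicates over illustrative indicator attributes:
criteria = [
    lambda ind: ind.get("outside_spec", False),              # KO 1
    lambda ind: ind.get("collection_effort") == "high",      # KO 2
    lambda ind: ind.get("privacy_issue", False),             # KO 3
    lambda ind: not ind.get("relevant", True),               # KO 4
    lambda ind: not ind.get("availability_related", True),   # KO 5
    lambda ind: ind.get("rare_event", False),                # KO 6
    lambda ind: ind.get("redundant", False),                 # KO 7
]

indicators = [
    {"name": "Number of errors caused by changes", "availability_related": True},
    {"name": "Percentage of emergency-fix changes", "outside_spec": True},
]

selected = [i for i in indicators if passes_knockout(i, criteria)]
print([i["name"] for i in selected])  # ['Number of errors caused by changes']
```

The emergency-fix indicator is dropped by KO 1, mirroring the example in Table 4.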

4.2. Calculation of Indicator Rank (IR)

In order to illustrate the selection procedure of the indicators and their evaluation, we choose the change management process. Every indicator is evaluated on the basis of the developed evaluation matrix (see Figure 3) and is ranked with respect to the availability enhancement potential of Enterprise Information Systems. The evaluation matrix is created in accordance with the value benefit analysis (Zangemeister 1976, Keeney and Raiffa 1993).


Figure 3. Evaluation matrix by the example of the CobiT® (AI3) indicator 'Number of infrastructure components that are no longer supportable (or will not be in the near future)'. [The matrix scores the indicator against the criteria C1.1.1-C4.3 with a target degree of completion of 0, 1, 3 or 5 points per criterion; the weighted factor sums F1 (reliability), F2 (redundancy), F3 (error detection/troubleshooting/error compensation) and F4 (recovery) yield a total score of 56,67 out of 100 for this indicator.]


In general, steady-state availability A_ss is defined as

A_ss = 1 − downtime/lifetime = uptime/lifetime = MTTF / (MTTF + MTTR)    (1)

where MTTF and MTTR stand for mean time to failure and mean time to recovery (or repair), respectively. In a nutshell, increasing MTTF and decreasing MTTR results in higher availability.
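As a quick numerical illustration of Eq. (1), a sketch with made-up figures:

```python
# Steady-state availability per Eq. (1): A_ss = MTTF / (MTTF + MTTR).
def steady_state_availability(mttf_hours: float, mttr_hours: float) -> float:
    return mttf_hours / (mttf_hours + mttr_hours)

# Example: a failure every 999 h on average and 1 h to repair
# yields "three nines" availability.
print(f"{steady_state_availability(999.0, 1.0):.3f}")  # 0.999
```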

Definition 4.1: Availability of Enterprise Information System
Availability of an Enterprise Information System corresponds to the fulfillment level of the organization's mission represented by processes which are measured by preselected and ranked indicators.

Indicators influencing availability can be divided into two main areas: MTTF-related and MTTR-related factors. They are further divided into four factors:

• Reliability (related to MTTF),
• Redundancy (related to MTTF),
• Error detection, troubleshooting and error compensation (related to MTTF) and
• Recovery (related to MTTR).

All these four factors were divided into a number of criteria. These criteria can be weighted by their target degree of completion. The assignment of values to the factors and criteria with respect to value benefit analysis can be found in Figure 4. Four steps are necessary to quantify the potential of an indicator with respect to availability enhancement of Enterprise Information Systems:

Step 1: Select main area
Step 2: Select factor
Step 3: If the factor 'reliability' was applied, select the assignment area
Step 4: Select the target degree of completion and continue with Step 1.

Both of the main evaluation areas MTTF and MTTR were weighted equally because at present it is not clear which one has to be prioritized. Overall, a normalized maximal score of 100 can be reached, which has been divided equally into MTTF and MTTR, each with 50 points. The reliability factor is subdivided into the assignment areas

• 'quantitatively' and
• 'qualitatively',

whereby 'quantitatively' is usually weighted higher. The reason for the prioritization of indicators with a quantifiable result is that these values allow more precise statements about the system that has to be evaluated than indicators with a qualitative value. Furthermore, the evaluation area 'qualitatively' has to be subdivided into the antivalence (EXOR) criteria

• 'the indicator is based on qualitative estimations of the personnel, software, hardware (age, failures, throughput, error rate) and refers to availability' and
• 'the indicator refers to security (standards) or refers to integrity and/or privacy'

whereas only the evaluation of one of the criteria is acceptable. With this classification


Figure 4. Assignment of values to the evaluation matrix with respect to value benefit analysis. [The weighting tree: the indicator rank (max. 100 points) splits equally into MTTF and MTTR (max. 50 points each); MTTF splits into reliability, redundancy and error detection (max. 16,67 points each), while MTTR consists of recovery alone (max. 50 points); each criterion C1.1.1-C4.3 is scored with a target degree of completion of 0, 1, 3 or 5.]


the indicators with respect to security, personnel, software and hardware could be ranked separately. The next factor 'redundancy' is subdivided into the EXOR criteria

• 'the indicator refers to redundancy at an organizational/personnel level' and
• 'the indicator refers to redundancy at all levels (except personnel/organizational)'

to evaluate separately the indicators which follow an Enterprise Information System or IT personnel aspect. The last MTTF evaluation point 'error detection, troubleshooting and error compensation' has to be subdivided into the assignment criteria

• 'the indicator refers to maintenance',
• 'the indicator refers to monitoring' and
• 'the indicator refers to precaution'.

The indicators which refer to maintenance can be framed into

• 'qualitative description' and
• 'quantitative value',

whereby qualitative indicators may be granted up to 3 points and quantitative indicators up to 5 points. Consequently, quantitative indicators are ranked higher. Indicators with

Table 5. Indicator rank (IR) of the 16 most important Change Management process indicators with respect to availability enhancement of Enterprise Information Systems (for abbreviations see Table 3).

Basic RPM | Identified Process | Indicator | IR
CobiT® | AI3 | Number of infrastructure components that are no longer supportable (or will not be in the near future) | 56,67
CobiT® | DS13 | Hours of unplanned downtime caused by operational incidents | 47,78
ITIL® | CHM | Proportion of successful back-out planning | 47,78
CobiT® | DS12 | % of personnel trained in safety, security and facilities measures | 45,56
SPiCE Lite | CHM (Secure Changes) | % of changes which influence SLAs | 45,56
SPiCE Lite | CHM (Secure Changes) | % of missed changes with no existing back-out plan | 44,44
CobiT® | PO8 | % of defects uncovered prior to production | 41,11
CobiT® | DS13 | Number of training days per operations personnel per year | 37,78
CobiT® | DS12 | Frequency of physical risk assessment and reviews | 37,78
ITIL® | CHM | Number of errors caused by changes | 35,56
SPiCE Lite | CHM (Secure Changes) | % of changes not assessed by the CAB | 35,56
SPiCE Lite | CHM (Fast Changes) | % of urgent changes caused by incidents | 35,56
SPiCE Lite | CHM (Repeatable Process) | % of changes which are necessary due to previous change errors | 35,56
CobiT® | DS13 | Number of service levels impacted by operational incidents | 34,44
ITIL® | CHM | Number of successfully implemented changes | 34,44
ITIL® | CHM | Number of RFCs which were tested prior to implementation | 34,44


respect to 'monitoring' can be framed into their level of monitoring (technical level, organizational and personnel level or process level). Thereby, indicators which monitor certain processes are given more points. Only indicators which refer to the last criterion of 'error detection, troubleshooting and error compensation', namely 'precaution in the area of tests and backups', can be granted the maximum points. The MTTR evaluation factor 'recovery' is subdivided into three assignment areas:

• 'the indicator describes the recovery',
• 'the indicator refers to the root cause analysis' and
• 'the indicator refers to the process during the recovery (policies...)'.

Each indicator can get a maximum of 5 points regarding its target objective. In a following step these points are multiplied by the corresponding weight of the respective criterion. Finally, the indicator rank is determined by summation over the respective factors. The reason for such a classification in the areas of MTTF and MTTR is that only those indicators which refer to the area of availability enhancement of Enterprise Information Systems can attain the highest score.
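A minimal sketch of this score-times-weight summation follows. The criterion maxima are taken from Figure 4, the weights are derived as maximum divided by the 5-point scale, and the EXOR choices are fixed by hand; the exact per-criterion weights and the example scores are illustrative assumptions, not values from the paper.

```python
# Indicator rank as a weighted sum: each criterion gets a target degree
# of completion (0, 1, 3 or 5 points), multiplied by the criterion's
# weight; the products are summed over all factors (normalized max. ~100).
def indicator_rank(scores, weights):
    """scores and weights: dicts keyed by criterion name."""
    return sum(scores[c] * weights[c] for c in scores)

# Weight = criterion maximum (from Figure 4) / 5 points.
weights = {
    "C1.1.1": 11.11 / 5,                          # reliability, quantitative
    "C1.2.1": 5.56 / 5, "C1.2.2": 5.56 / 5,       # reliability, qualitative (EXOR)
    "C2.1": 16.67 / 5, "C2.2": 16.67 / 5,         # redundancy (EXOR)
    "C3.1": 5.56 / 5, "C3.2": 5.56 / 5, "C3.3": 5.56 / 5,    # error detection
    "C4.1": 16.67 / 5, "C4.2": 16.67 / 5, "C4.3": 16.67 / 5  # recovery
}

# Full marks on one branch of each EXOR pair reach the normalized maximum.
full = {c: 5 for c in ["C1.1.1", "C1.2.1", "C2.1",
                       "C3.1", "C3.2", "C3.3", "C4.1", "C4.2", "C4.3"]}
print(round(indicator_rank(full, weights), 1))  # ~100.0
```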

The indicator rank describes the significance of an indicator with respect to availability enhancement of Enterprise Information Systems. Since every relevant indicator which passed the knock-out criteria (see Table 4) went through a consistent evaluation matrix, and hence was evaluated consistently regarding the target objective, it is now possible to quantify the impact of each analyzed indicator from each selected RPM to assess and enhance availability of Enterprise Information Systems. Table 5 shows the indicator rank of the 16 most important indicators which map to the primary process change management with high impact on availability enhancement of Enterprise Information Systems. It is not surprising that, through the mapping of the almost complete CobiT® RPM (29 out of 34 processes), the largest number of indicators came from CobiT®. Due to the holistic approach to the identified process over several RPMs, one can clearly see the additional important indicators, which shows that more than one RPM is necessary to make more precise statements about availability enhancement of Enterprise Information Systems.

5. Case Study

The correctness of the indicator rank IR was verified using a field study on process maturity in two medium-size enterprises. The objective of this study was to demonstrate the first step in quantifying the dependability of Enterprise Information Systems based

Table 6. Source of evaluated indicators per identified process (for abbreviations see Table 3).

Process | ITIL® | CobiT® | CMMI® | MOF | SPiCE Lite [ITSM]
AM | 21% | 54% | 2% | 3% | 20%
CHM | 19% | 61% | 4% | 4% | 12%
IM | 32% | 39% | 0% | 5% | 24%
ITSCM | 21% | 58% | 9% | 8% | 4%
CAM | 19% | 47% | 9% | 3% | 22%
CM | 9% | 72% | 14% | 3% | 2%
PM | 16% | 58% | 3% | 10% | 13%
RM | 21% | 41% | 3% | 9% | 26%
SLM | 26% | 39% | 0% | 11% | 24%


Table 7. Indicator rank (IR) of the 42 most important indicators with respect to availability enhancement of Enterprise Information Systems (for abbreviations see Table 3).

Rank | Indicator | IR
1 | Average time for data restoration | 66,67
2 | Number of infrastructure components that are no longer supportable (or will not be in the near future) | 56,67
3 | % of critical infrastructure components with automated availability monitoring | 52,22
4 | Time between internal control deficiency occurrence and reporting | 51,11
5 | % of IT roles with qualified backup personnel | 48,89
6 | Time lag between the reporting of the deficiency and the action initiation | 48,89
7 | Hours of unplanned downtime caused by operational incidents | 47,78
8 | Proportion of successful back-out planning | 47,78
9 | Number of critical business processes not covered by a defined service availability plan | 47,78
10 | Average response time of support requests (telephone and email) | 47,78
11 | Downtime arising from physical environment incidents | 45,56
12 | % of personnel trained in safety, security and facilities measures | 45,56
13 | Average duration of incidents by severity | 45,56
14 | % of incidents that require local support (field support, personal visit) | 45,56
15 | % of changes which influence SLAs | 45,56
16 | % of missed changes with no existing back-out plan | 44,44
17 | Average time for resolving an incident, grouped into categories | 44,44
18 | Frequency of testing of backup media | 43,33
19 | Number of relevant infrastructure components with automated capacity monitoring | 42,22
20 | % of defects uncovered prior to production | 41,11
21 | Number of SLA violations caused by inadequate service performance or by inadequate service component performance | 41,11
22 | % of incidents solved in time defined by SLAs | 41,11
23 | Number of major incidents | 41,11
24 | Number of implemented measures with the objective of increasing availability | 40,00
25 | Number of delayed business initiatives due to IT organizational inertia or unavailability of necessary capabilities | 40,00
26 | % of roles with documented position and authority descriptions | 40,00
27 | Number of cause and effect relations identified and incorporated in monitoring | 40,00
28 | Does an incident handling plan exist? | 40,00
29 | % of redundant/duplicate data elements | 38,89
30 | Number of unplanned increases to service or component capacity as result of capacity bottlenecks | 38,89
31 | Number of incidents occurring because of insufficient service or component capacity | 38,89
32 | % of incidents reopened | 38,89
33 | Number of SLA violations caused by support contracts of third-party suppliers | 38,89
34 | % of IT services and infrastructure components under availability monitoring | 38,88
35 | Average duration of IT service interruptions | 38,66
36 | Number of training days per operations personnel per year | 37,78
37 | Frequency of physical risk assessment and reviews | 37,78
38 | % of IT staff receiving quality awareness/management training | 37,78
39 | % of assets monitored through centralized tool(s) | 37,78
40 | % of incidents resolved within agreed/acceptable period of time | 37,78
41 | Number of injuries caused by the physical environment | 35,56
42 | % of problems for which a root cause analysis was undertaken | 35,56

on ranked indicators of qualitative generic RPMs. Table 6 shows the source of evaluated indicators per identified process. A holistic approach was necessary to extend the base of indicators with respect to dependability of Enterprise Information Systems. Finally, Table 7 shows the identified and ranked 42 most important indicators with very high impact on availability enhancement of Enterprise Information Systems.

The study showed that it is not economically efficient and frequently infeasible for


an enterprise to measure every available indicator to evaluate the availability of their Enterprise Information System. The proposed IR allows an enterprise to derive their own specific most significant indicators. The ranked indicators provide good guidance to evaluate and to enhance the availability of Enterprise Information Systems.

6. Conclusions and Future Work

As quantifying qualitative dependability assessment and enhancement based on RPMs such as CobiT® remains a challenge, our objective in this paper was to quantify and rank the significance of dependability-related indicators.

Using nine identified ITIL® processes, a process mapping onto five other RPMs (CobiT®, CMMI®, CITIL®, MOF and SPiCE Lite [ITSM]) was proposed. For all 72 identified processes, all available indicators were determined. In the first step, these collected indicators were pre-selected regarding the availability assessment and enhancement of Enterprise Information Systems on the basis of seven proposed knock-out criteria. In the second step, these pre-selected indicators were ranked on the basis of an evaluation matrix following the value benefit analysis, which reflects the availability assessment and enhancement capability of each indicator with respect to Enterprise Information Systems.

The proposed approach allows a more precise and objective evaluation of the availability of Enterprise Information Systems based on quantitative indicators and maturity levels of RPMs. Using the proposed method, we can also analyze and rank any indicator of any generic RPM regarding availability of Enterprise Information Systems. It is our belief that with this comprehensive and transparent approach we can better understand issues related to dependability and can enhance the dependability of Enterprise Information Systems more effectively.

References

Hoffmann, G.A., Trivedi, K.S. and Malek, M., 2007. A best practice guide to resource forecasting for computing systems. IEEE Transactions on Reliability, December 2007.

Hoffmann, G.A. and Malek, M., 2006. Call Availability Prediction in a Telecommunication System: A Data Driven Empirical Approach. Proceedings of the 25th IEEE Symposium on Reliable Distributed Systems.

Milanovic, N. and Milic, B., 2011. Automatic Generation of Service Availability Models. IEEE Transactions on Services Computing, Vol. 4/2011, pp. 56-69.

Malek, M., Milic, B. and Milanovic, N., 2008. Analytical Availability Assessment of IT Services. Lecture Notes in Computer Science. Springer Berlin / Heidelberg, Vol. 5017/2008, pp. 207-224.

ITGI, 2007. Control Objectives for Information and Related Technologies (CobiT) 4.1. IT Governance Institute.

OGC, 2007. ITIL Lifecycle Publication Suite. Version 3: Continual Service Improvement, Service Operation, Service Strategy, Service Transition, Service Design, 3rd ed. Stationery Office.

CMMI Product Team, 2010. CMMI for Development. Version 1.3: Software Engineering Institute (SEI), November 2010.

Nehfort, A., 2007. SPiCE Assessments for IT Service Management according to ISO/IEC 20000-1.


Steinmann, C. and Stienen, H., 2002. Enabling Software Process Improvement - Concepts and Experiences. March 2002.

Norton-Middaugh, B., Dyer, J., Henry, C., Lemmex, D. and Osborne, J., 2008. Microsoft Operations Framework. Version 4.0: Microsoft, April 2008.

Debreceny, R. and Gray, G.L., 2009. IT Governance and Process Maturity: A Field Study. Proceedings of the 42nd Hawaii International Conference on System Sciences.

Simonsson, M. and Johnson, P., 2008. The IT Organization Modeling and Assessment Tool: Correlating IT Governance Maturity with the Effect of IT. Proceedings of the 41st Hawaii International Conference on System Sciences.

wibas GmbH, 2007. CMMI for IT Operations v1.0 (CITIL = CMMI+ITIL). wibas GmbH, March 2007.

Goldschmidt, T., 2009. Entwicklung eines Modells für die Verfügbarkeitsbewertung auf Basis generischer Referenzmodelle (Engl.: Development of an Availability Assessment Model Based on Generic Reference Models). Otto-von-Guericke-Universität Magdeburg, Diplomarbeit, January 2009.

Goldschmidt, T., Dittrich, A. and Malek, M., 2009. Quantifying Criticality of Dependability-Related IT Organization Processes in CobiT. Proceedings of the 15th IEEE Pacific Rim International Symposium on Dependable Computing.

CMMI Product Team, 2010. CMMI for Services. Version 1.3: Software Engineering Institute (SEI), November 2010.

ISO/IEC 15504-2:2003, 2003. Information Technology - Process assessment - Part 2: Performing an assessment.

Liu, H. and Yu, L., 2005. Toward Integrating Feature Selection Algorithms for Classification and Clustering. IEEE Transactions on Knowledge and Data Engineering, Vol. 17, No. 4, April 2005.

Zangemeister, C., 1976. Nutzwertanalyse in der Systemtechnik - Eine Methodik zur multidimensionalen Bewertung und Auswahl von Projektalternativen. Ph.D. Thesis, Technische Universität Berlin.

Keeney, R. and Raiffa, H., 1993. Decisions with Multiple Objectives: Preferences and Value Trade-Offs. Cambridge University Press, July 1993.