COMPOSABLE RISK-DRIVEN PROCESSES FOR DEVELOPING
SOFTWARE SYSTEMS FROM COMMERCIAL-OFF-THE-SHELF (COTS)
PRODUCTS
by
Ye Yang
A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(COMPUTER SCIENCE)
December 2006
Copyright 2006 Ye Yang
Dedication
To my parents.
Acknowledgements
This dissertation could not have been completed without the support of many
hearts and minds. First and foremost, I would like to thank my PhD committee
members; in particular, my advisor Dr. Barry Boehm. I am deeply indebted to him
for his encouragement, advice, mentoring, and research support throughout my
academic program. I also truly appreciate his kindness and generosity, which made
my student life smoother and more enjoyable. My sincere thanks are also extended
to other committee members Dr. George Friedman, Dr. Ming-Deh Huang, Dr.
Nenad Medvidovic, and Dr. Stan Settles, for their invaluable guidance in focusing
my research and their efforts in reviewing drafts of my dissertation.
I would also like to express my thanks and gratitude to several other people.
Dr. Daniel Port guided me in identifying potential empirical studies in the USC e-
services projects. Mr. Jesal Bhuta has been a wonderful friend and great co-worker
for several years. He has given graciously of his time to help me succeed.
Dr. Elizabeth Clark has been very generous in sharing the COCOTS data
and providing independent assessment in evaluating the performance of my work.
My special thanks go to other colleagues, including Dan Wu, Zhihao Chen, Yue
Chen, Ricardo Valerdi, and Jo Ann Lane, discussions with whom have greatly
benefited my study. I would also like to thank Ms. Winsor Brown and Dr. Brad
Clark for their help and collaboration in my study.
I also wish to thank Prof. Mingshu Li of the Institute of Software, Chinese
Academy of Sciences, whose early mentoring and support not only gave me
exposure to the field of software engineering, but also taught me a great deal about
professionalism and interpersonal skills.
Lastly, from the bottom of my heart, I would like to thank my family for their
unconditional love and support during my study. My parents have inspired me
always to strive for excellence since childhood. My brother, sister-in-law, and
niece were there every step of the way. Lei, my dear husband, your love is
the greatest gift of all, and I could not have done it without you! And my dearest
daughter, QianQian, who arrived in the world just as I completed this
dissertation, you are my forever sunshine and strength!
Table of Contents
Dedication
Acknowledgements
List of Tables
List of Figures
Abstract
Chapter 1 Introduction
1.1 Motivation
1.2 Nature of the problem
1.3 Research Approach and Propositions
1.4 Dissertation Outline
Chapter 2 A Survey of Related Work
2.1 Processes for COTS-Based Development
2.1.1 COTS Selection Process
2.1.2 Waterfall-Based Lifecycle Processes
2.1.3 Spiral-Based Lifecycle Processes
2.1.4 Object-Oriented Process Models
2.2 Costing CBD
2.2.1 COCOTS
2.2.2 Others
2.3 Risks in CBD
2.4 Conclusions
Chapter 3 Composable Processes for Developing CBAs
3.1 Empirical Study Indications
3.1.1 Background
3.1.2 Quantitative CBA Growth Trend
3.1.3 COTS-Related Effort Distribution
3.1.4 Relating COTS Activity Sequences to Risks
3.2 Five principles for CBA development
3.2.1 Process happens where the effort happens
3.2.2 Don't start with requirements
3.2.3 Avoid premature commitments, but have and use a plan
3.2.4 Buy information early to reduce risk and rework
3.2.5 Prepare for COTS change
3.3 CBA Process Decision Framework
3.3.1 Win Win Spiral Model and its Limitation
3.3.2 CBA Process Decision Framework
3.4 Composable Process Elements
3.4.1 Assessment Process Elements (APE)
3.4.2 Tailoring Process Element (TPE)
3.4.3 Glue Code Process Element
3.5 Generating Composable Process instances
3.5.1 Project Description
3.5.2 Process Description
3.5.3 Spiral Cycle 1
3.5.4 Spiral Cycle 2
3.5.5 Spiral Cycle 3
Chapter 4 COCOTS Risk Analyzer
4.1 Background
4.1.1 COCOTS model
4.1.2 Glue Code Sub-model in COCOTS
4.2 Modeling the COCOTS Risk Analyzer
4.2.1 Constructing the knowledge base
4.2.2 Identifying cost factor's risk potential rating
4.2.3 Identifying project risk situations
4.2.4 Evaluating the probability of risk
4.2.5 Analyzing the severity of risk
4.2.6 Assessing overall project risk
4.2.7 Providing risk mitigation advices
4.3 Prototype of COCOTS Risk Analyzer
4.4 Evaluations
4.4.1 USC e-services projects
4.4.2 Industry projects
4.4.3 Discussion
Chapter 5 Optimizing CBD Process Decisions
5.1 Project Context
5.2 Risk based prioritization Strategy
5.3 Optimizing process decisions
5.3.1 Establishing COTS evaluation criteria
5.3.2 Scoping COTS assessment process: An example
5.3.3 Sequencing COTS integration activities
5.3.4 Prioritization of top features: Another example
5.4 Discussion
Chapter 6 Validation Results
6.1 Results from Fall 2004
6.1.1 Team Performance
6.1.2 Client Satisfaction
6.1.3 Other Results
6.2 Results from Fall 2005
6.2.1 Pre-experiment survey results
6.2.2 Experiment results
6.2.3 Post-experiment survey results
6.3 Threats to Validity
Chapter 7 Contribution and Future Work
7.1 Contributions
7.2 Areas of Future Work
References
Appendix A Empirical analysis results
Appendix B Guidelines for developing assessment intensive CBAs
Appendix C COCOTS Risk Analyzer Delphi
Appendix D USC E-Services Project COTS Survey
List of Tables
Table 1.1 COTS characteristics and impacts
Table 1.2 Composable Process Elements
Table 3.1 CBA Activity Sequence Examples
Table 3.2 Anticipated COTS activity patterns
Table 3.3 Unanticipated COTS Activity Patterns
Table 3.4 Mapping Win Win Spiral segments to CBA PDF
Table 3.5 Comparing CBA APE to ISO/IEC 14598-4
Table 3.6 Tailoring methods and common evaluation parameters
Table 3.7 OIV Spiral Cycle 1 – Alternative identification
Table 3.8 OIV Spiral Cycle 1 – Evaluation and elaboration
Table 3.9 OIV Spiral Cycle 2 – Alternative identification
Table 3.10 OIV Spiral Cycle 2 – Evaluation & Elaboration
Table 3.11 OIV Spiral Cycle 3 – Alternative identification
Table 3.12 OIV Spiral Cycle 3 – Evaluation & Elaboration
Table 4.1 COCOTS Glue Code sub-model cost drivers
Table 4.2 Delphi Responses for Size Mapping (Size: KSLOC)
Table 4.3 Mapping between cost factor rating and risk potential rating
Table 4.4 Assignment of Risk Probability Levels
Table 4.5 Risk mitigation advices
Table 4.6 Comparison of USC e-services and industry projects
Table 5.1 Steps of Risk Based Prioritization Strategy
Table 6.1 Comparison of 2004 groups A and B on number of defects
Table 6.2 Comparison of 2004 groups A and B on client satisfaction
Table 6.3 Reported Benefits from the Usage of the Framework
Table 6.4 USC e-services non-CBA projects in Fall 2005
Table 6.5 Comparison of 2005 group A and B project profiles
Table 6.6 Evaluation of COCOTS Risk Analyzer
Table 6.7 Comparison of 2005 groups A and B on number of defects
Table 6.8 Comparison of 2005 groups A and B on client satisfaction
Table A.1 Data on CBA growth trend and CBA classification
Table A.2 Data on 1997 projects
Table A.3 Data on 1998 projects
Table A.4 Data on 1999 projects
Table A.5 Data on 2000 projects
Table A.6 Data on 2001 projects
Table A.7 Data on development effort (Year: 2001-2002)
Table A.8 Data on COTS Activity Sequences (Year: 2001-2002)
Table A.9 Data on COTS activity sequence
Table A.10 Data on COTS effort distribution in USC projects
Table B.1 Project Constraints Specification
Table B.2 Business Use Case Description
Table B.3 Prioritized System Capability Specification
Table B.4 Levels of Service Specification
Table B.5 Stakeholder Roles / Level of Service Concerns Relationship
Table B.6 Stakeholders and responsibilities
Table B.7 Examples of COTS assessment instruments
Table B.8 Examples of facilities description
Table B.9 Example of list of COTSs
Table B.10 Example of evaluation criteria
Table B.11 Example of evaluation criteria breakdowns
Table B.12 Example of evaluation screen matrix
Table B.13 Example of conclusion-recommendation pairs
List of Figures
Figure 1.1 Required Approach for COTS-Based Systems [Albert 02]
Figure 2.1 A Waterfall-Based CBD Process
Figure 2.2 EPIC Process
Figure 2.3 The COCOTS Cost Estimation Model
Figure 3.1 CBA growth trend
Figure 3.2 Effort distribution of USC e-service projects
Figure 3.3 Effort distribution of COCOTS industry projects
Figure 3.4 Elaborated WinWin Spiral Model
Figure 3.5 CBA Process Decision Framework (PDF)
Figure 3.6 Assessment Process Element (APE)
Figure 3.7 Tailoring Process Element (TPE)
Figure 3.8 Glue Code Process Element (GPE)
Figure 4.1 COCOTS Risk Analyzer Workflow
Figure 4.2 Delphi results of risk situations
Figure 4.3 Productivity range of COCOTS cost factors
Figure 4.4 Risk quantification equation
Figure 4.5 Input interface
Figure 4.6 Output interface
Figure 4.7 Evaluation results of USC e-services projects
Figure 4.8 Evaluation results of Industry projects
Figure 5.1 Different risk patterns result in different activity sequences
Figure 6.1 Comparison of COTS Impacts in two groups
Figure 6.2 Fall 2005 Experiment Setting
Figure 6.3 Experience with COTS activities
Figure 6.4 COTS activities performed in 2005
Figure 6.5 Activity improved in 2005
Figure B.1 Benefits Realization Approach Results Chain
Figure B.2 COTS assessment context diagram
Figure B.3 Elaborated Spiral Model for COTS Assessment Project
Abstract
Research experience has suggested that software processes should be thought
of as a kind of software, which can be developed into composable component
pieces that can be executed to perform different software lifecycle activities.
General experience has indicated that the activities conducted while developing
COTS-based applications (CBA) differ greatly from those conducted in traditional
custom development. The primary research questions addressed in this dissertation
are (1) Can these activity differences be characterized and statistically analyzed?
(2) If so, can the primary CBA activity classes be organized into a decision
framework for projects developing CBAs? The resulting research provides a
value-based composable set of processes for CBAs that includes an associated
Process Decision Framework (PDF), a set of Composable Process Elements
(CPEs), and a COCOTS Risk Analyzer.
A composable process implies the ability to construct a specific process from a
higher-level, broader process framework and a set of reusable process
elements. The PDF is a recursive, re-entrant configuration structure that
establishes the relationships among a mix of the CPEs and other process fragments
extended from the risk-driven WinWin Spiral model. The CPEs include
Assessment, Tailoring, and Glue Code development/integration, the
three primary sources of effort due to CBA development considerations, as
indicated by empirical analysis of both large industry and small campus e-services
CBA projects. Each CPE is a defined, repeatable workflow. While the framework
provides a composition basis to support developers in navigating the
option space in developing CBAs, the three process elements establish the basic
constituents for composing COTS processes based on common process patterns
identified in empirical studies. A technique named the COCOTS Risk Analyzer has
also been developed and implemented to aid the optimization of process decisions
via a risk-based prioritization strategy. Altogether, the proposed solution supports
flexible composition of process elements with respect to evolving stakeholders'
value propositions, the COTS market, and risk considerations.
To validate the value-based set of processes, experiments were designed
and performed on student projects in the USC graduate-level software engineering
class in the Fall 2004 and Fall 2005 semesters. The evaluation results show that
applying the value-based processes significantly improves team performance.
Chapter 1 Introduction
1.1 Motivation
There is no Silver Bullet. [Brooks 86]
Economic imperatives are inexorably changing the nature of software
development processes. For software organizations under competitive pressure,
developing applications from Commercial-Off-The-Shelf (COTS) products has
recently become a widely adopted approach to reduce delivery time and
development cost through reuse. Meanwhile, software processes are increasingly
moving away from conventional processes that compose pure-custom software from
lines of code, and toward processes for assessment, tailoring, and integration of
COTS or other reusable components. The primary economic drivers of this change
are:
The increasing criticality of software capabilities to a product’s competitive
success;
The ability of COTS or other reusable components to significantly reduce a
product’s cost and development time;
An increasing number of COTS products available to provide needed user and
infrastructure capabilities.
In practice, however, COTS has not turned out to be a "silver bullet". A great
deal of experience on the relative advantages and disadvantages of COTS solutions
has been summarized and reported [IBM 77, NTIS 80, NASA 80, ICP 80,
Auerbach 80, Datapro 80, Boehm 81, Garlan 95, Oberndorf 97, Vigder 98, Voas
98, Abts 01]. From a business perspective, the issues involve the short-term and
long-term costs, benefits, evolution and associated risks of using COTS products.
Moreover, practitioners are also finding that building systems using COTS products
requires new skills, knowledge, and abilities; changed roles and responsibilities;
and different processes [SAB 00]. This has led to a consensus that the development
and maintenance processes for COTS-based software systems are significantly
different from those for custom-built systems, and present many challenges
to software developers. Some of these challenges are:
The marketplace is characterized by a vast array of products and product
claims;
Extreme quality and capability differences exist between products, and
There exist many product incompatibilities, even when they purport to adhere
to the same standards. [Oberndorf 97]
There is increasing awareness that, without a well-thought-out process, many
CBA projects will be unsuccessful in meeting their objectives. In order to more
effectively use COTS products in software development, a well-defined process
methodology is required to facilitate the exploitation of COTS advantages and the
avoidance of development pitfalls.
The motivation behind this research is to make an advance toward organizing
software processes into a COTS Process Decision Framework, which can provide
effective process decision guidance with respect to project cost estimates as well as
risk assessment; to determine how one can realize the cost-saving and schedule-
reduction objectives expected of using COTS; and, eventually, to help produce
more reliable, higher-quality software systems from COTS products.
1.2 Nature of the problem
There is a consensus that the special characteristics of COTS software have a
pervasive impact on software lifecycle activities and require software process
adjustments and accommodations [Fox 97, Carney 98, Brownsword 98, Vigder 98].
Table 1.1 examines both positive and negative aspects of COTS characteristics,
their impact on software lifecycle activities, and the required accommodations.
Table 1.1 COTS characteristics and impacts

Positive:
  Affordability: COTS integration can lead toward savings. (Impacts: Cost analysis)
  Timeliness: COTS packages are immediately available. (Impacts: Planning, Requirements)
  Tailorability: COTS architecture is set for a general domain, and interaction mechanisms are provided for specific customization. (Impacts: Design, Integration)

Negative:
  Opacity: No white-box testing. (Impacts: Test)
  Incompatibility: COTS product may not be compatible with exact functionality or other COTS. (Impacts: Requirements, Integration, Cost analysis)
  Supportability: COTS vendor may no longer exist or no longer provide support. (Impacts: Integration, Maintenance)
  Uncontrollability: The user has no influence on COTS evolution. (Impacts: Maintenance)

Both:
  Proliferation: Large numbers of similar COTS are available. (Impacts: Requirements, Design)
  Dynamism: COTS products, vendors, and marketplace are dynamically changing. (Impacts: Whole life cycle)
A software process has been defined as a coherent set of activities and
associated results for software production [Sommerville 00]. A number of general
software process models such as the Waterfall model [Royce 70] and the Spiral
model [Boehm 88] have been widely applied in traditional systems development.
However, processes based on these models have met considerable difficulties when
applied to CBD, due to the significant CBD impacts on software life cycle activities
listed in Table 1.1 above.
Figure 1.1 Required Approach for COTS-Based Systems [Albert 02]
The traditional software processes present a common limitation: most assume
the system being built will be coded largely from scratch [Fox 97]. Not
surprisingly, this limitation has caused considerable difficulties and produced many
lessons learned within CBD, such as those reported in [Garlan 95, Braun 99, Albert
00, Fox 00, Lewis 01]. In such cases, the potential COTS benefits may be
significantly limited or may even turn into adverse effects which, if not
appropriately handled, could lead a seemingly simple project into disaster. In
the few successful cases reported, the use of COTS products often involves delays
and inefficiencies [Sledge 98], more integration effort than expected
[Medvidovic 97, Swanson 97, Maxey 03], and a significant amount of maintenance
effort [Balk 00].
Traditional sequential requirements-design-code-test (waterfall) processes do
not work for CBD [Benguria 02], simply because there is a critical relationship
among COTS product selection, requirement specification, and architecture design
in the front-end. The decision to use a COTS product constitutes acceptance of
many of the requirements that led to the product, and to its design and
implementation. This leads to concurrent engineering of requirements,
architecture, and COTS selection, as shown in Figure 1.1 [Albert 02].
Though some bottom-up and Object-Oriented (OO) design methods facilitate, to
some degree, the maximum use of proven COTS in the analysis and design
phases by viewing the system as a set of components, the opacity, incompatibility,
and volatility of COTS products [Basili 01] often inevitably introduce a great deal
of backtracking and retrying of early activities in later phases. This results in a high
degree of recursion and concurrency in the back-end CBD processes, which should
be carefully planned and managed through more effective processes and risk
management techniques. Additionally, there are a number of newly emerging,
nontypical activities requiring changes to conventional software processes, such as
COTS technology and product evaluation, adaptation, and integration
[Brownsword 98].
Meanwhile, as the fraction of CBD projects has increased among USC e-
services projects, developers have encountered increasing conflict between
CBD process needs and USC's MBASE process and documentation guidelines
[Boehm 99]. This has led to a good deal of confusion, frustrating rework, risky
decisions, and a few less-than-satisfactory products [Boehm 02], as evidenced by
numerous teams succumbing to effort-allocation pitfalls. For example, some teams
performed too much COTS assessment and neglected to allocate enough time for
training, tailoring, and integration, resulting in project delays, an inability to
implement desired functional capabilities, and so forth. In other cases, developers
did not perform enough assessment, which resulted in problems and delays during
construction phases. Such losses may be minor in student projects implemented as
part of coursework; however, for large-scale systems, or systems that must meet a
specific window of opportunity, such losses may be extremely significant.
All these problems point to the need for new processes, methods, and
guidelines for CBD. Therefore, the principal research questions being addressed in
this study are:
In COTS-based application development, what are the key process
elements needed to accommodate COTS characteristics, and how can they
be organized into a decision framework to help strategic planning and
control with respect to evolving project situations?
1.3 Research Approach and Propositions
The research questions are answered through empirical studies [Basili 96,
Jeffery 99] that can aid in deriving a useful solution. These empirical studies have
led to a value-based set of processes for developing CBAs, which is the main
contribution of this work.
The value-based set of processes for CBAs includes five primary data-
motivated principles, an associated process framework, and a set of composable
process elements to support flexible composition of process elements with respect
to stakeholders' evolving value propositions and risk considerations. In this work,
the word "composable" refers to the ability of a process component to be
composed with others to effectively support the gamut of different specialized
tasks. This corresponds to the view suggested in [Osterweil 87] that software
processes should be thought of as software to be developed, which can be composed
from component pieces covering different software lifecycle activities.
The three primary composable process elements (CPEs), namely COTS
assessment, tailoring, and glue code development, have been developed based on
previous model-building experience with COCOTS [Abts 01] and on empirical
analysis of both large and small CBA project effort data. Table 1.2 below outlines
each process element with its targeted accommodations for COTS characteristics.
Each CPE is a defined, repeatable workflow for CBA developers to follow. The
Process Decision Framework (PDF) is a recursive, re-entrant configuration
structure, and establishes the relationships among a mix of the CPEs and other
process fragments which are extended from the risk-driven WinWin Spiral model.
While the framework provides a composition basis to support developers for
navigating through the option space in developing CBAs, the three process
elements establish the basic constituents for composing COTS processes based on
common process patterns identified in empirical studies. Further, the COCOTS
Risk Analyzer is formulated and developed as a technique within the PDF to assess
COTS integration risk and optimize process decision making by following a risk-
based prioritization strategy in developing CBAs.
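The risk-based prioritization strategy above can be illustrated with a small sketch. It uses the standard risk-exposure formulation (probability of loss times size of loss); the cost-factor names, probability levels, and severity weights below are invented placeholders for illustration, not the calibrated values used by the COCOTS Risk Analyzer.

```python
# Illustrative sketch of risk-based prioritization over COTS cost-factor
# ratings. The factor names, probability levels, and severity weights are
# hypothetical placeholders, not calibrated COCOTS values.

# Each identified risk situation pairs a cost factor with an estimated
# probability of loss and a severity weight (e.g. from productivity ranges).
RISK_SITUATIONS = [
    # (cost factor, probability of loss 0-1, severity weight)
    ("COTS product maturity", 0.7, 3.0),
    ("Interface complexity",  0.4, 2.5),
    ("Vendor support",        0.5, 1.5),
]

def risk_exposure(probability: float, severity: float) -> float:
    """Classic risk exposure: probability of loss times size of loss."""
    return probability * severity

def prioritize(situations):
    """Rank risk situations by exposure, highest first, so the highest-
    exposure items drive process decisions (e.g. more assessment effort)
    before lower-exposure ones."""
    scored = [(name, risk_exposure(p, s)) for name, p, s in situations]
    return sorted(scored, key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    for name, exposure in prioritize(RISK_SITUATIONS):
        print(f"{name}: exposure = {exposure:.2f}")
```

With these invented numbers, "COTS product maturity" (exposure 2.1) would be addressed before the lower-exposure factors.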
Table 1.2 Composable Process Elements (descriptions adapted from [Abts 01])

  Assessment: The activity whereby COTS products are evaluated and selected as viable components for a user application. (Accommodates: Affordability, Timeliness, Opacity, Incompatibility, Proliferation, Dynamism, Supportability)
  Tailoring: The activity whereby COTS software products are configured for use in a specific context. This definition is similar to the SEI definition of "tailoring". (Accommodates: Tailorability, Incompatibility)
  Glue Code: The activity whereby code is designed, developed, and used to ensure that COTS products satisfactorily interoperate in support of the user application. (Accommodates: Uncontrollability, Incompatibility)
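As an informal illustration of how such elements might be composed into a project-specific process, the sketch below models process elements as data objects. The element names mirror Table 1.2, but the classes, fields, and composition logic are assumptions of this sketch, not the dissertation's formal framework definition.

```python
from dataclasses import dataclass, field

# Minimal illustration of composable process elements (CPEs). The three
# element names mirror Table 1.2; everything else (fields, chaining) is a
# hypothetical sketch, not the formal PDF definition.

@dataclass
class ProcessElement:
    name: str            # e.g. "Assessment"
    accommodates: list   # COTS characteristics the element addresses

@dataclass
class ProcessInstance:
    """A project-specific process composed from reusable elements."""
    elements: list = field(default_factory=list)

    def compose(self, element: ProcessElement) -> "ProcessInstance":
        self.elements.append(element)
        return self      # allow chained composition

ASSESSMENT = ProcessElement("Assessment", ["Opacity", "Proliferation"])
TAILORING = ProcessElement("Tailoring", ["Tailorability", "Incompatibility"])
GLUE_CODE = ProcessElement("Glue Code", ["Uncontrollability", "Incompatibility"])

# One spiral cycle might compose the elements in a risk-driven order:
cycle = ProcessInstance().compose(ASSESSMENT).compose(TAILORING).compose(GLUE_CODE)
print([e.name for e in cycle.elements])
```

The point of the sketch is only that a process instance is built by selecting and ordering reusable elements, rather than by following one fixed lifecycle.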
Based on the research results in identifying and characterizing the primary CBA
activity classes and resulting PDF, the other main research hypothesis has been that
CBA development teams following the value-based processes would outperform
others using traditional processes. The specific metrics used to compare the
performance of these two groups are development effort, number of defects, team
project grade on quality of delivered artifacts, and client evaluation. In order to
validate the hypothesis, structured experiments were run at the University of
Southern California in 2004 and 2005 with the graduate-level software engineering
course CSCI577. In both experiments, the value-based CBA processes were
taught at the beginning of the course, and each year about 20 projects were
randomly assigned to follow either the value-based CBA processes or traditional
processes to complete their development. The above metrics were collected and
used to compare team performance.
1.4 Dissertation Outline
The organization of this dissertation is as follows:
Chapter 2 provides a survey of related work in the areas of process,
costing, and risk management for CBD.
Chapter 3 presents the results from three empirical studies, and introduces the
definition of value-based Composable Processes, including a CBD Process
Decision Framework and its three Composable Process Elements, all derived from
the empirical findings. A case study project is covered to illustrate the application
of the Composable Processes.
Chapter 4 describes an approach named COCOTS Risk Analyzer to automate
the risk assessment during COTS-based integration in order to aid risk management
practices and process decision making. The evaluation results of this approach on
two datasets are also discussed.
Chapter 5 describes a risk-based prioritization strategy for optimizing process
decisions when applying the composable processes with the support of the
COCOTS Risk Analyzer.
Chapter 6 describes the validation experiments and results.
Finally, Chapter 7 summarizes the contributions of this thesis study and
proposes several areas for future research.
Chapter 2 A Survey of Related Work
The work presented in this dissertation draws upon research in a number of
areas including software processes, cost estimations, and risk management for
COTS-based development. This chapter discusses the related work in each of
these areas.
2.1 Processes for COTS-Based Development
A considerable amount of research has been dedicated to facilitating COTS
reuse. A number of researchers have identified different types of COTS-based
systems and have affirmed that the processes and activities to be followed in
developing each type of system differ substantially [Carney 97, Wallnau
98, Dean 00]. Besides established, well-accepted processes such as the
general Waterfall and Spiral models, the Rational Unified Process (RUP)
[Jacobson et al. 99], and Model-Based Architecting and Software Engineering
(MBASE) [Boehm 01], there are several newly proposed CBD processes such as
those described in [Kontio 96, Carney 97, Fox 97, Braun 99, Vigder 98, Morisio
00, Brownsword 00, Albert 02]. This section provides an overview of these.
2.1.1 COTS Selection Process
The OTSO (Off-The-Shelf Option) Method [Kontio 96] provides specific
techniques for defining evaluation criteria, comparing the costs and benefits of
alternative products, and consolidating the evaluation results for decision-making.
The definition of hierarchical evaluation criteria is the core task in this method. It
defines four different subprocesses: definition of search criteria, definition of the
baseline, definition of detailed evaluation criteria, and weighting of criteria. Even
though OTSO recognizes that the key problem in COTS selection is the lack of
attention to requirements, the method does not provide or suggest an effective
solution: it assumes that requirements already exist, since it uses a requirements
specification for interpreting requirements.
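OTSO's hierarchical criteria definition and weighting amount to a weighted scoring of candidate products. The minimal sketch below shows the idea; the criteria, weights, and candidate scores are invented for illustration and are not part of the OTSO method itself.

```python
# Hedged sketch of weighted evaluation criteria for COTS selection, in the
# spirit of OTSO's criteria-definition and weighting subprocesses. The
# criteria, weights, and candidate scores are invented for illustration.

CRITERIA_WEIGHTS = {"functionality": 0.5, "cost": 0.3, "vendor_support": 0.2}

CANDIDATE_SCORES = {
    # per-criterion scores on a 1-10 scale, per candidate COTS product
    "ProductA": {"functionality": 8, "cost": 5, "vendor_support": 7},
    "ProductB": {"functionality": 6, "cost": 8, "vendor_support": 6},
}

def weighted_score(scores: dict) -> float:
    """Consolidate per-criterion scores into a single weighted total."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

def rank_candidates(candidates: dict):
    """Return candidate names ordered best-first by weighted score."""
    return sorted(candidates,
                  key=lambda name: weighted_score(candidates[name]),
                  reverse=True)

if __name__ == "__main__":
    for name in rank_candidates(CANDIDATE_SCORES):
        print(name, round(weighted_score(CANDIDATE_SCORES[name]), 2))
```

The consolidation step is where OTSO's decision-making support enters: the weights encode stakeholder priorities, and the ranking feeds the selection decision.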
Another work, presented in [Kunda 99], describes the STACE (Social-Technical
Approach to COTS Evaluation) framework, an approach that emphasizes social
and organizational issues in the COTS selection process. The main limitation of
this approach is the lack of a well-defined process for requirements acquisition and
modeling. Moreover, STACE does not provide a systematic analysis of the
evaluated products using a decision-making technique.
The PORE (Procurement-Oriented Requirements Engineering) Method
[Maiden 99] is a template-based approach to support requirements acquisition. The
method uses an iterative process of requirements acquisition and product
evaluation. It identifies four goals in a thorough COTS selection process: (a)
acquiring information from the stakeholders, (b) analyzing the information to
determine if it is complete and correct, (c) making the decision about product
requirement compliance if the acquired information is sufficient, and (d) selecting
one or more candidate COTS components. Although the PORE method includes
some requirements acquisition techniques, it is not clear how requirements are used
in the evaluation process and how products are eliminated.
The CARE (COTS-Aware Requirements Engineering) approach [Chung 04] is
a mixed OO, goal-oriented, and agent-oriented requirements engineering process
model including activities of defining baselined agents, goals, system requirements,
software requirements, and architecture. It then searches through a maintained
COTS product repository for appropriate COTS components that match the defined
goals. However, the approach lacks support for stakeholder negotiation and process
concurrency, and does not address issues related to COTS incompatibility, risk
management, or cost-benefit analysis during the COTS selection process.
A number of other recent COTS selection processes such as those in [ISO 99,
Lawlis 01, Dorda 03] identify key tasks and artifacts, but fail to address how the
selection process fits into a lifecycle process and overlook the concurrency and
high coupling among COTS assessment, tailoring, glue code development, and
other typical engineering activities. Other related work includes the COTS
specification template [Dong 99] and the comprehensive reuse characterization
scheme [Basili 91], which have been suggested to describe COTS software
components in uniform ways to support product selection and evaluation.
2.1.2 Waterfall-Based Lifecycle Processes
In general, the CBD lifecycle consists of the following phases: identification,
evaluation, selection, integration, and update of components [Oberndorf 97]. Some
recently proposed CBD process models attempt to address COTS related issues by
adding CBA extensions to a sequential waterfall-based process [Morisio 00] (as
shown in Figure 2.1). These work in some situations, but not in others where the
requirements, architecture, and COTS choices evolve concurrently, which presents
serious problems for requirements-first CBA processes. Such a model is better than
a pure waterfall at performing concurrent COTS package evaluation, selection, and
requirements analysis. But committing to requirements on this basis, before
performing design and glueware integration analysis, is likely to run into the kind
of architectural mismatch problems that caused the factor-of-4 schedule overruns
and factor-of-5 budget overruns discussed in [Garlan 95].
Figure 2.1 A Waterfall-Based CBD Process
This kind of problem will confront organizations or countries whose official
acquisition processes are requirements-first waterfall models. It can even be a
problem in agile methods such as Extreme Programming that espouse Simple
Design and Refactoring for accommodating requirements change. If your early
agile iterations for initial users lock you into a COTS product with limited growth
potential, no amount of refactoring will get the COTS product to (for example)
double its throughput when this is needed for later iterations.
In such cases, the best one can do is to reinterpret the process; invest in enough
concurrent engineering of overall requirements and COTS-based design to ensure a
feasible combination before setting user expectations; and have a much later
System Requirements review that includes a viable system design and a COTS integration
feasibility demonstration.
2.1.3 Spiral-Based Lifecycle Processes
Since waterfall-based process models could cause expensive premature-
commitment problems as discussed above, process frameworks such as the spiral
model [Boehm 88], the Infrastructure Incremental Development Approach (IIDA)
[Fox 97], and the SEI Evolutionary Process for Integrating COTS-Based Systems
(EPIC) process [Albert 02] may be adapted to provide suitably flexible and
concurrent frameworks for addressing these issues. For example, EPIC uses a risk-
based spiral development process to keep the requirements and architecture fluid as
the four spheres of influence are considered and adjusted to optimize the use of
available components. Iterations systematically reduce the trade space, grow the
knowledge of the solution, and increase stakeholder buy-in. At the same time, each
iteration, or spiral, is planned to mitigate specific risks in the project.
However, they have not, to date, provided a specific decision framework for
navigating through the option space in developing CBAs, as illustrated in Figure
2.2 (a) and (b). EPIC is good at identifying key activities that address the
important COTS considerations, e.g., evaluating alternatives; identifying and
resolving risks; accumulating specific kinds of knowledge; increasing
stakeholder buy-in; and making incremental decisions that shrink the trade
space. But one still needs a process model with enough planning content to
enable projects to monitor their progress toward completion. And its lack of
intermediate milestones leaves it open to a number of other major problem
sources.
(a) EPIC Process Objectives
(b) EPIC Phases
Figure 2.3 EPIC Process

One problem source is the lack of guidance on what steps to take next, or on
how long to perform them. Another is the lack of status information for
communicating and controlling progress toward completion. A third is the
likelihood of nonconvergence, especially if risk assessment is poorly done, as
in the “study-wait-study” syndrome discussed next.
A good example of the study-wait-study syndrome was that of an organization
wishing to configure a new corporate software development support environment
composed largely of COTS or open-source products supporting requirements,
design, code, integration, test, CM, QA, planning and control, estimation, etc. They
did a thorough job of identifying stakeholder needs, analyzing architecture
options, surveying the marketplace, and assessing risks. They converged to some
extent by rejecting unacceptable options. But for three years in a row, their
marketplace analysis indicated that better capabilities were on the horizon, and
they opted to wait and study the options again the following year. Finally, they realized that
they were losing productivity gains by waiting and developed a milestone plan to
pilot and adopt the best-available option. The plan included downstream growth
potential as an evaluation criterion, and a rebaselining step based on experience
with the pilot project, and enabled them to get out of the study-wait-study cycle.
2.1.4 Object-Oriented Process Models

This section discusses some OO approaches that offer specific technical
solutions applicable to component-based software engineering, some of which
will be integrated into the thesis work proposed later.
An established, well-accepted OO process is the Rational Unified Process
(RUP) [Jacobson etc. 99]. It is based on four phases (inception, elaboration,
construction, and transition) and a collection of core process disciplines including
business modeling, requirements, analysis and design, implementation, test,
deployment, etc. Although RUP is not a COTS-oriented process, it does support the
use of component-based architectures. In addition to supporting the traditional
approach in which a system is built from scratch, RUP also supports building a
system with the intent of developing reusable, in-house components, and an
assembly approach of building software from COTS components. The RUP model
supports a component approach in a number of ways. The iterative approach allows
developers to progressively identify components and choose the ones to develop,
reuse, or buy. The structure of the software architecture describes the components,
their interactions, and how they integrate. Packages, subsystems, and layers are used
during analysis and design to organize components and specify interfaces. Single
components can be tested first followed by larger sets of integrated components.
Another OO approach, developed at the USC Center for Software Engineering, is
Model-Based (system) Architecting and Software Engineering (MBASE). MBASE
extends the spiral model and offers a set of integrated models that consider
product, success, process, and property models of the system under development
to avoid model clashes, along with a set of three life-cycle anchor points:
Life Cycle Objectives (LCO), Life Cycle Architecture (LCA), and Initial
Operational Capability (IOC) [Boehm 01]. The LCO, LCA, and IOC have become
the key milestones of the RUP. MBASE provides an approach for negotiating
requirements, capturing operational concepts, building initial design models,
assessing project risks, and planning the life cycle. This approach has been widely
adopted in the development of USC campus-wide e-services projects.
However, these general OO approaches become very difficult to apply when
serious component incompatibility problems emerge during development,
especially during COTS integration. Software architects and product developers
are generally familiar with the concept that trying to integrate two or more
arbitrarily selected products or product models can lead to serious conflicts.
Examples include mixing functional and object-oriented COTS components, or
the architectural style mismatches detailed in [Garlan 95].
Some technical COTS integration approaches have been proposed to address
the problem. For example, a C2 architecture suitable for COTS integration is
proposed in [Medvidovic 97]. C2 is a component- and message-based architectural
style. A C2 architecture is a hierarchical network of concurrent components
linked together by connectors (message-routing devices) in accordance with a
set of style rules. It allows the use of heterogeneous components regardless of
their internal architecture, uses asynchronous message passing, and makes no
assumptions about a shared address space or a single thread of control.
Research has been conducted on using different COTS middleware in the C2
architecture [Deshofy 99]. In addition, the identification and classification
of architectural mismatches between software components were studied in
[Abd-Allah 96], [Gacek 97], and [Yakimovich 99].
Besides the architecture approaches and techniques discussed above, there is a
high demand for a life-cycle process that identifies the problems and that
selects, plans, and coordinates effective approaches and techniques to be
applied in CBD.
2.2 Costing CBD
To achieve the potential benefits of cost and schedule reduction, costing is an
important part of COTS-based system development: it quantifies the cost savings
expected from the use of COTS. Cost analysis activities are intricately related
to many development process activities [Erdogmus 00]; therefore, an effective
CBD process has to take them into consideration and provide solutions. This
section outlines the current approaches to CBD cost estimation. One well-known
approach is the industry-calibrated COnstructive COTS cost estimation model
(COCOTS) [Abts 01], presented in Figure 2.3.
2.2.1 COCOTS

COCOTS is a member of the USC COCOMO II family of cost estimation models [1].
It was developed to estimate the expected initial cost of integrating COTS
software into a new software system development or system refresh, currently
focusing on three major sources of integration costs: 1) COTS product
assessment, 2) COTS product tailoring, and 3) integration or “glue code”
development. COCOTS effort estimates are made by composing individual
assessment, tailoring, and glue-code effort sub-models, as illustrated in Figure 2.3.
Figure 2.4 The COCOTS Cost Estimation Model
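The composition described above can be sketched minimally as follows. This is an illustration, not the calibrated model: the actual COCOTS sub-models compute each term from product and project drivers, while here the function name and the person-month figures are assumptions for the example.

```python
def total_cots_integration_effort(assessment_pm, tailoring_pm, glue_code_pm):
    """Combine the three COCOTS-style sub-model estimates (person-months).

    A simple additive composition, for illustration only; the calibrated
    COCOTS sub-models derive each term from assessment, tailoring, and
    glue-code cost drivers.
    """
    return assessment_pm + tailoring_pm + glue_code_pm

# Illustrative sub-model outputs, in person-months:
print(total_cots_integration_effort(3.0, 5.5, 8.0))  # → 16.5
```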
The COCOTS model has been adopted by several commercial cost estimation tools,
e.g., PRICE-S [Minkiewicz 04]. It has also been found very helpful for cost
estimation and resource planning in USC e-services CBA development projects
[Boehm 03c].
While the COCOTS model was developed to determine the economic feasibility of
COTS-based solutions, its model structure covers most, if not all, of the
important project characteristics that should be carefully examined when
assessing CBD project risk. One example is the risk assessment method in
[Port 04b], which uses the set of COTS assessment attributes defined in the
COCOTS Assessment sub-model to help determine the “sweet spot” amount of COTS
assessment effort that balances the project’s “too many errors” risk against
its “late delivery” risk.
2.2.2 Others

Other than the COCOTS model, a number of publications report different methods
to assess CBD effort, such as the Net Present Value (NPV) and Real Options
approaches in [Erdogmus 99]. The NPV analysis concentrates on the impact of
product risk and development time on the defined metric; the Real Options
valuation primarily deals with uncertainty. However, these two approaches are
too theoretical to apply in practice. Furthermore, [Erdogmus 00] reports a
rudimentary model that relies on a set of rules of thumb, based on the COTS
product types to be used in the system, to estimate effort and schedule for a
given development activity. [Brooks 04] proposes an example of a parametric
model using historical data from a Northrop Grumman Mission Systems COTS-intensive
development.
Though these estimation techniques provide seemingly satisfactory results
(e.g., 62% estimation accuracy for the COCOTS glue code model), and could be
further calibrated to improve accuracy once more industry data becomes
available, they do not provide guidance for CBD, and therefore offer developers
no more help than a numerical value. That value may turn out to be useless in
cases where developers follow a poor process and select the wrong COTS
products, causing tremendous rework.
2.3 Risks in CBD
Contrary to common perception, CBD is not a low-risk development strategy that
provides a simple and rapid mechanism for increasing the functionality and
capability of a system. Many of the problems in CBD are a consequence of poor
appreciation of the risks involved and of their management [Rashid 01]. The
author also analyzes the factors that cause these risks, including:
The blackbox nature of commercial off-the-shelf (COTS) software.
The quality of COTS software.
The lack of component interoperability standards.
The disparity in the customer-vendor evolution cycles.
The author also categorizes CBD risks on the basis of six key application
development activities (component evaluation, system integration, development
process, application context, system quality, and system evolution) and
proposes a risk management mechanism based on identifying risk minimization
strategies for each category individually.
Another risk evaluation approach, developed at the SEI [Carney 03], is the COTS
Usage Risk Evaluation (CURE). CURE is a “front end” analysis tool that predicts
the areas where the use of COTS products will have the greatest impact on the
program; it is designed to find risks relating to the use of COTS products and
report those risks back to the organization being evaluated. The CURE
evaluation consists of four activities:
preliminary data gathering
on-site interview
analysis of data
presentation of results
CURE has been performed at the Department of Defense, other government
agencies, and industrial organizations. Since it is currently defined as a
standalone technique that must be performed by externally trained CURE
evaluators, it is difficult for small and medium software organizations to
adopt for effectively identifying and managing their risks.
2.4 Conclusions
Although the research achievements discussed above provide help and insights
into COTS selection processes, CBD lifecycle processes, COTS integration cost
estimation, and CBD risk management, the processes are either overly sequential
or non-specific, which limits their wide application in real CBD projects.
Furthermore, there is a lack of an integrated framework that connects the
process, costing, and risk issues and provides process guidance for strategic
planning and control within the CBD lifecycle. Providing such a framework is
the main objective of our study, which will be discussed next.
Chapter 3 Composable Processes for Developing CBAs
This chapter begins with summaries of previous empirical studies, from which
the proposed solution is derived. Section 3.2 presents the five principles for
CBA development. Section 3.3 introduces the definition of CBA Process Decision
Framework (PDF). Elaborations on the three composable COTS-related process
elements (CPE) are presented in Section 3.4. Section 3.5 demonstrates how to use
CBA PDF and CPEs to generate process instances through an example project.
3.1 Empirical Study Indications
3.1.1 Background

The data used in our empirical studies is from student software development
projects within the USC-CSE CS577 course from 1997 through 2004. One of the
major features and objectives of this class is that all the students are provided with
real projects, organized into (typically) 6-person-teams, collaborating with real
clients, dealing with a limited budget and schedule, and thus encountering realistic
development challenges. Although there are some significant differences between
large and small CBD projects, it has been found that they share many real-world
factors such as client demand for sophisticated functionality, fixed time schedules,
limited developer resources and skills, lack of client maintenance resources, and
many others. Therefore, the empirical studies presented next are built upon the
belief, and provide some data to substantiate, that COTS experiences from small
USC e-services projects are representative of CBD in general.
3.1.2 Quantitative CBA Growth Trend

The SEI defines a COTS-Based System (CBS) very generally as “any system which
includes one or more COTS products” [Brownsword 00]. This includes most current
systems, including many that treat a COTS operating system and other utilities
as a relatively stable platform on which to build applications. Such systems
can be considered “COTS-based systems,” as most of their executing instructions
come from COTS products, but COTS considerations do not affect their
development process very much.
To provide a focus on the types of applications for which COTS considerations
do affect the development process, a less inclusive term, COTS-Based
Application (CBA), is defined as:
A system for which at least 30% of the end-user functionality (in terms of
functional elements: inputs, outputs, queries, external interfaces, internal files) is
provided by COTS products, and at least 10% of the development effort is devoted
to COTS considerations.
The numbers 30% and 10% are not sacred quantities, but approximate
behavioral CBA boundaries observed in CBD projects. There has been a
significant gap observed in our COTS-related project effort statistics. The projects
observed either reported less than 2% or over 10% COTS-related effort, but never
between 2 and 10% [Boehm 03]. Projects with low COTS-related effort often had
considerable COTS infrastructure but relatively little COTS end-user functionality.
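The definition above reduces to a simple two-threshold predicate. The sketch below is illustrative: the 30% and 10% thresholds come directly from the definition, while the function name and fraction-valued inputs are hypothetical conventions for the example.

```python
def is_cba(cots_functionality_fraction, cots_effort_fraction):
    """Working CBA test from the definition: at least 30% of end-user
    functionality provided by COTS products, and at least 10% of the
    development effort devoted to COTS considerations."""
    return (cots_functionality_fraction >= 0.30
            and cots_effort_fraction >= 0.10)

print(is_cba(0.50, 0.15))  # True: a CBA by the working definition
print(is_cba(0.50, 0.02))  # False: COTS-related effort below the 10% boundary
```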
An increasing fraction of CBA projects have been observed in over six years of
USC e-services project data. More specifically, the CBA fraction has increased
from 28% in 1997 to 80% in 2004. Similar results (54% in 2000) were found in the
Standish Group’s 2000 survey [Chaos 00]. Meanwhile, many notable effects have
emerged along with this increase: for example, staffing and education for CBA
software engineering require not only programming skills but also the system
engineering skills of cost-benefit analysis, tradeoff analysis, and risk
analysis. This situation motivates the development and evolution of a
CBA-oriented process framework to provide guidance for CBA developers. More
specifics on this were reported in [Boehm 03].
Figure 3.5 CBA growth trend (* Standish Group CHAOS 2000)
3.1.3 COTS-Related Effort Distribution

Based on an analysis of the 2000-2002 USC e-services data and 1996-2001 COCOTS
calibration data, similar distributions of CBA effort are observed for the
campus e-services applications and for large industry-government CBAs. These
effort distributions exhibit large variation in COTS-related (assessment,
tailoring, and glue code) effort, as illustrated in Figures 3.2 and 3.3
respectively.
The small USC e-service projects in Figure 3.2, as discussed earlier, are all
5-person teams working under zero or very little budget and a 24-week schedule
constraint. Assessment effort ranged from 4 to 383 hours. Tailoring effort ranged
from 0 to 407 hours. Glue code effort ranged from 0 to 103 hours.
Figure 3.6 Effort distribution of USC e-service projects
Figure 3.7 Effort distribution of COCOTS industry projects
The industry projects in Figure 3.3 were a mix of some small but mostly large
business management, engineering analysis, and command control applications.
Assessment effort ranged from 1.25 to 147.5 person-months (PM); tailoring
effort ranged from 3 to 648 PM; glue code effort ranged from 1 to 1411 PM.
They had an average of 13% additional effort in adapting to new releases.
These effort distributions exhibit large variation of COTS-related (assessment,
tailoring, and glue code) effort. They show that there is no one-size-fits-all effort
distribution for CBAs. More specifics can be found in [Boehm 03].
There were some differences between the small e-services projects and the
industry projects. More industry projects have very large amounts of glue code; the
e-services projects’ fixed schedule made such projects very risky to attempt. On
other hand, the e-services projects were more frequently assessment-intensive,
primarily to minimize the risk of missing an inflexible deadline.
3.1.4 Relating COTS Activity Sequences to Risks

Another empirical analysis showed that each CBA project has a distinct pattern
in the sequence in which various COTS-related activities were implemented.
Observation and analysis of various e-services projects led to the conclusion
that projects do not all follow similar activity sequences and that many had to
backtrack to previous stages in the process. To facilitate the investigation of
COTS-related issues, the developers were required to submit both individual
effort reports and project progress reports on a weekly basis, stating what
COTS-related activities (assessment, tailoring, or glue code) had been planned,
performed, finished, or cancelled; what COTS-related changes had been
introduced and agreed on; and what COTS issues needed to be sorted out through
further study or stakeholders’ renegotiation and new commitments.
Based on the weekly progress reports, the COTS activity sequences of all the
projects were reconstructed. Table 3.1 shows a sampling of the A (assessment),
T (tailoring), G (glue code development and integration), and C (custom
development) sequences for some of the CBA projects in each of their Inception,
Elaboration, Construction, and Transition phases. The activities are
time-ordered from left to right; parentheses indicate activities performed in
parallel. The different combinations of assessment, tailoring, and glue code
activities resulted from insufficient earlier assessment (for example, projects
3 and 5), COTS changes (projects 1, 2, and 4), or requirements changes
(project 9). Such decision factors are not directly addressed by the current
literature on CBS processes. It is an interesting coincidence that these are
the same letters molecular biologists use for the genetic code; the analogy may
have merit here, given the importance of the AGCT patterns.
Table 3.3 CBA Activity Sequence Examples
Project ID Activity Sequences
Inception Elaboration Construction Transition
1 A AC ATG C
2 A AT A A
3 A (TG)A G G
4 A A(TG) A(TG) G
5 AT AT T T
6 A T TG G
7 AT T T T
8 AT (AA) TG (TGC) G
9 A AT TG G
Our empirical analysis shows that each of these CBA projects has a distinct
pattern in the sequence in which the various activities were implemented. Such
differences in the activity sequences resulted from specific sets of project
characteristics and associated risks reported by the projects. Analyzing the
CBA project sequence data showed that certain patterns commonly exist in almost
all CBA projects, and that some patterns, if not properly handled, indicated
significant project risk leading to significant loss in terms of confusion,
rework, and schedule delay [Port 04].
With respect to the CBA development process, some notable patterns were
expected (and indeed appeared), while others were surprises. Such patterns are
interesting in their own right; however, they have primarily been used to
derive, validate, and refine the CBA decision framework introduced in
Section 3.2. The observed activity patterns fall into the following two
categories:

Anticipated Patterns: Each anticipated pattern addresses certain identified
COTS-related risks, avoiding such risks where possible and controlling or
transferring the remaining risks. Table 3.2 lists the five identified
anticipated activity patterns, with the avoided risk shown in column 4 and the
probability of occurrence in the 9 observed projects shown in the last column:
Table 3.4 Anticipated COTS activity patterns
No. Name Description Avoided Risks Prob.
AP-1 Assessment first
After identifying OC&P’s and collecting an initial set of COTS candidates, COTS assessment is usually performed.
Selecting faulty COTS candidate;
Faulty vendor claim
100%
AP-2 Assessment to tailoring (A=>T).
While assessment is on going or once assessment is done, it becomes clearer what can be customized in a COTS product before being utilized.
COTS is developed for general purpose, not for any particular system
100%
AP-3 Tailoring to glue code (T =>G).
When integrating COTS packages, often tailoring can help prepare unrelated COTS packages to “fit” together with glue-code.
Integrated system did not perform as expected.
33%
AP-4 Assessment to tailoring and glue code (A=>(TG) or A =>T=>G).
This is particularly true with multiple COTS components that require a thorough assessment with tailoring and glue code development to test COTS usability and interoperability.
Insufficient early assessment without prototyping by tailoring and glue coding
67%
AP-5 After Inception, A, T, TG as a repeatable pair (A=>A or T=>T or (TG)=>(TG)).
Due to frequent requirement changes or new COTS insights, re-assessing or retailoring a COTS package is common in addition to possibly a certain amount of rework on glue code to accommodate the changes.
Assuming that initial COTS will satisfy the new requirements.
67%
Unanticipated Patterns: An unanticipated pattern is one that is not typically
expected to occur (and as a result, not often planned for) in a project, yet
has a project rationale and is observed within the case-study projects.
Table 3.3 lists the two identified unanticipated activity patterns.
COTS activity sequences provide an effective means of analyzing complex and
often subtle COTS processes. In particular, they have proven invaluable for
validating and refining the CBA decision framework, which will be shown to be
of substantial value to developers inexperienced in COTS-based system
development. Moreover, the identification of CBA sequence-risk patterns
provides a means to identify and avoid COTS risks and aids in strategic project
planning.
Table 3.5 Unanticipated COTS Activity Patterns
No Name Description Indicated Risk Prob.
AP-6 Tailoring to assessment (TA)
When tailoring COTS, need to re-assess selected COTS due to project changes (e.g. Reqt’s, COTS, and priority changes)
1) Requirement changes make the initial COTS choice no longer satisfactory;
2) COTS version upgrade during development demands re-evaluation;
3) Tailored package didn’t perform as expected.
33%
AP-7 Glue code to assessment (GA)
Integration difficulty causes re-assessing COTS.
1), 2) above and
3) Integrated system did not perform as expected.
4) Lack of interoperability standards to facilitate the integration
20%
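The A/T/G/C sequence notation lends itself to simple mechanical checks. As an illustrative sketch (the function name and flattening convention are hypothetical, not part of the dissertation's method), a phase sequence can be flattened and scanned for a risk-indicating subsequence such as AP-7's glue-code-to-assessment backtrack:

```python
def contains_pattern(phase_sequence, pattern):
    """Check whether an A/T/G/C activity pattern occurs in a phase sequence.

    Parenthesized (parallel) groups are flattened, so "(TG)A" is scanned as
    "TGA"; this simplifies the parallel-activity notation of Table 3.1.
    """
    flat = phase_sequence.replace("(", "").replace(")", "").replace(" ", "")
    return pattern in flat

# Project 3's Elaboration sequence "(TG)A" contains glue code followed by
# re-assessment ("GA"), i.e. the unanticipated pattern AP-7:
print(contains_pattern("(TG)A", "GA"))  # True
```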
3.2 Five principles for CBA development
Based on our empirical analysis of over 50 USC e-services CBA projects, the
following five principles for CBA development are derived:
1. process happens where the effort happens;
2. don’t start with requirements;
3. avoid premature commitments, but have and use a plan;
4. buy information early to reduce risk and rework; and
5. prepare for COTS change.
3.2.1 Process happens where the effort happens

Similar effort distributions tended to produce similar process sequences. This
has been illustrated in Figures 3.2 and 3.3 through the CBA effort
distributions for USC campus e-services applications and large
industry-government CBAs.
This also confirms that no one-size-fits-all effort distribution or development process
exists for CBAs. Some projects, particularly in the small e-services area, focused
almost exclusively on assessment, with little tailoring or glue code required once
the COTS products were selected. Others, generally applications supported by
large, single COTS Enterprise Resource Planning or Web portal packages, required
primarily tailoring. Some CBAs needed extensive glue code development and
integration because the selected best-of-breed COTS packages weren’t designed to
operate with each other. And some projects demanded a great deal of effort in all three
areas. But the general similarity of diverse CBA project distributions implies the
need for using a form of Integrated Product and Process Development approach as
in the Capability Maturity Model Integrated (CMMI) guidelines [Chrisis 03].
3.2.2 Don’t start with requirements

The Merriam-Webster Dictionary defines a requirement as “something claimed
or asked for by right and authority.” Once the developers, customers, and users
agree to call something a requirement, many customers and users will see the
requirement as their own. This puts the developer on a fixed-price project or the
customer on a cost-plus project in the untenable position of managing
unmanageable expectations or overconstrained COTS solution options.
Fundamentally, something isn’t a requirement if you can’t afford it.
Committing to requirements before performing design and glueware integration
analysis will likely create architectural mismatch problems causing factor-of-four
schedule overruns and factor-of-five budget overruns. In such cases, the best you
can do is reinterpret the process, invest in enough concurrent engineering of overall
requirements and COTS-based design to ensure a feasible combination before
setting user expectations, and have a much later system requirements review that
includes a viable system design and a COTS integration feasibility demonstration.
Risk analysis (Principle 4) is key here.
3.2.3 Avoid premature commitments, but have and use a plan

So you follow Principle 2, but you still need a process model with enough
planning content to let you monitor the project’s progress toward completion. EPIC
offers an example of an underdetermined CBA process model. It can identify and
provide insights on important COTS considerations, but its lack of intermediate
milestones leaves it open to at least three major problems:
It lacks guidance on next steps, or on how long to perform them.
It generates little status information for communicating and controlling
progress toward completion.
It increases the likelihood of nonconvergence, as in the “study-wait-
study” cycle of performing a COTS evaluation study, noting that better
new COTS capabilities were on the horizon, and waiting to perform a
new study that concludes the same thing and repeats the loop.
3.2.4 Buy information early to reduce risk and rework

Project teams often scope the COTS assessment and selection process based on
the COTS products’ likely cost. A better scope criterion is the amount of risk
exposure (probability of loss × value of loss) involved in choosing the wrong
combination. For example, a supply chain project team scoped its assessment based
on the COTS products’ cost of US$200,000, but the loss from picking an
incompatible combination of best-of-breed COTS products ran about $3 million
plus eight months of delay. Had they invested more than $200,000 in COTS
assessment to include interoperability prototyping, they would have been able to
switch to a more compatible COTS combination with only minor losses in
effectiveness and without expensive, late rework.
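The risk-exposure comparison in this example can be worked through numerically. The dollar figures come from the anecdote above; the 25% probability of choosing an incompatible combination without interoperability prototyping is an illustrative assumption, not a reported value.

```python
def risk_exposure(probability_of_loss, size_of_loss):
    """Risk exposure = probability of loss x size of loss."""
    return probability_of_loss * size_of_loss

product_cost = 200_000       # the COTS products' cost, used to scope assessment
rework_loss = 3_000_000      # loss from picking the incompatible combination
p_wrong_choice = 0.25        # assumed probability without prototyping

exposure = risk_exposure(p_wrong_choice, rework_loss)
print(exposure)                  # → 750000.0
print(exposure > product_cost)   # → True: worth buying more assessment information
```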
The risk-driven spiral model offers a good process framework for addressing
“how much COTS assessment is enough” issues. One good example balances the
risk of erroneous choices from insufficient COTS assessment with the risk of delay
from excessive COTS assessment to find a risk-driven “sweet spot” for the amount
of COTS assessment effort. However, the general spiral model resembles the EPIC
model in not providing COTS-specific process milestones and decision points. We
provide a COTS-specialized process framework and process elements here.
3.2.5 Prepare for COTS change

An annual survey of COTS change characteristics conducted at the aerospace-
related Ground System Architectures Workshops reports that vendors upgrade their
COTS products every 10 months, on average. The surveys also show that releases
go unsupported after an average of three releases. Thus, if you attempt to stabilize
on COTS release 3.0, you will typically find it going unsupported with release 6.0
about 30 months later. Or, if you outsource the development of a large application
that takes more than 30 months, you may find that it’s delivered with unsupported
COTS releases. One large COCOTS calibration project had 120 COTS products
delivered, 55 of which (46 percent) were no longer vendor-supported.
The challenge of vendor-controlled COTS change becomes even greater during
software maintenance. The more COTS products an application has, the greater the
challenge will be; the application with 120 COTS products would have an average
of 12 new releases per month. If the COTS products are tightly coupled, the
maintenance effort can scale as high as the square of the number of products [Basili
01, Mehta 00]. Furthermore, as COTS vendors try for competitive differentiation,
they are likely to introduce new features that are incompatible with those of other
COTS products.
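The arithmetic behind these figures can be sketched directly. The constants below are the survey averages quoted above; the pairwise-coupling function is only a rough stand-in for the quadratic growth in maintenance effort reported in [Basili 01, Mehta 00]:

```python
# Survey averages cited above: a new COTS release every 10 months on average,
# with releases going unsupported after about three further releases.
RELEASE_INTERVAL_MONTHS = 10
SUPPORTED_RELEASES = 3

def monthly_release_rate(num_cots):
    """Expected number of new vendor releases per month across all products."""
    return num_cots / RELEASE_INTERVAL_MONTHS

def months_until_unsupported():
    """How long a release you stabilize on stays vendor-supported."""
    return SUPPORTED_RELEASES * RELEASE_INTERVAL_MONTHS

def worst_case_coupling_pairs(num_cots):
    """Tightly coupled products: maintenance can scale with the number of
    product pairs, i.e. roughly the square of the product count."""
    return num_cots * (num_cots - 1) // 2

print(monthly_release_rate(120))   # 12.0 new releases per month
print(months_until_unsupported())  # 30 months
```

For the 120-product calibration project above, this reproduces the 12 new releases per month and the 30-month support window for a release you stabilize on.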
These challenges don’t mean application developers have no way to cope with
COTS change. Useful strategies include:
investing effort in a proactive COTS market-watch activity;
developing win-win rather than adversarial relations with COTS
vendors;
reducing the number of COTS products and vendors;
reducing inter-COTS coupling via wrappers, mediators, or
interoperability standards;
contracting for delivery of latest-release COTS products; and
developing and evolving a lifecycle COTS refresh strategy that’s
synchronized with business cycles (such as annual product models or
18-month operator retraining cycles), using tailored versions of the
process framework discussed next.
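As an illustration of the wrapper strategy above, the sketch below (all class and method names are hypothetical, not drawn from any cited project) isolates application code from a vendor API so that only the wrapper changes when a COTS release alters an interface:

```python
class ImageViewerWrapper:
    """Hypothetical wrapper isolating application code from a COTS vendor API.

    Only this class needs to change when a new COTS release alters the
    vendor's interface, keeping inter-COTS coupling low."""

    def __init__(self, vendor_api):
        self._api = vendor_api

    def open_image(self, path):
        # Suppose release 3.x exposed load() and release 4.x renamed it to
        # open_file(); the wrapper absorbs the difference.
        if hasattr(self._api, "open_file"):
            return self._api.open_file(path)
        return self._api.load(path)
```

Application code calls `open_image` regardless of which release is installed; the refresh cost is confined to the wrapper.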
Further discussion of CBA maintenance lessons appears elsewhere [Reifer 04].
3.3 CBA Process Decision Framework
3.3.1 Win Win Spiral Model and its Limitations
Figure 3.8 provides a further elaborated version of the Win Win Spiral Model than
that presented in [Boehm 88]. It returns to the original four segments of the spiral,
and adds stakeholders' win-win elements in appropriate places. It also emphasizes
concurrent product and process development, verification, and validation; adds
priorities to stakeholders' identification of objectives and constraints; and includes
the LCO, LCA, and IOC anchor point milestones [Boehm 96] also adopted by the
Rational Unified Process.
Figure 3.8 Elaborated WinWin Spiral Model
The risk-driven Win Win Spiral model is a good process framework for
addressing “how much COTS assessment is enough” issues. However, the general
spiral model is similar to the EPIC [Albert 02] model in not providing COTS-
specific process milestones and decision points.
In analyzing both USC e-services and industry CBA projects, it is found that
the ways that the better projects handled their individual assessment, tailoring, and
glue code activities exhibited considerable similarities at the process element level
as discussed in section 3.1. It is also found that these process elements fit into a
recursive and reentrant decision framework accommodating concurrent CBA
activities and frequent go-backs based on new and evolving client needs and risk
considerations. Therefore, this thesis work focuses on putting forward a reentrant,
recursive CBA process decision framework and three composable process elements
as a COTS-specialized risk-driven Win Win Spiral framework. The CBA process
decision framework is provided next, and its three composable process elements
are defined in Section 3.4.
This chapter demonstrates how to use the proposed solution and the Win Win
Spiral Model to generate composable CBA processes through an example CBA e-
services project, "Oversized Image Viewer" (OIV), in its first three Spiral cycles.
Important project information is briefly introduced in Section 3.5.1, while the
details of the application case study are presented in Section 3.5.
3.3.2 CBA Process Decision Framework
Figure 3.9 illustrates the overall CBA decision framework that composes the
assessment, tailoring, glue code, and custom code development process elements
within an overall development lifecycle. It presents the dominant decisions and
activities within CBA development, as abstracted from our observations and
analysis of USC e-services and CSE-affiliate projects.
The CBA process is undertaken by “walking” a path from “start” to “Non-CBA
Activities” that connects (via arrows) activities as indicated by boxes and decisions
that are indicated by ovals. Activities result in information that is passed on as input
to either another activity or used to make a decision. Information follows the path
that best describes the activity or decision output. Only one labeled path may be
taken at any given time for any particular walk; however it is possible to perform
multiple activities simultaneously (e.g. developing custom application code and
glue code, multiple developers assessing or tailoring).
Figure 3.9 CBA Process Decision Framework (PDF)
The small circles with letters A, T, G, and C indicate the assessment, tailoring, glue
code, and custom code development process elements, respectively. With the
exception of the latter, each of these process elements will be expanded and
elaborated in Section 3.4. The less obvious aspects of each process area are
summarized next.
P1: Identify stakeholders’ desired objectives, constraints, and priorities
(OC&P’s). The CBA process framework begins by having the systems
stakeholders identify and prioritize their value propositions (about features,
platforms, performance, budgets, schedules, etc.) and negotiate a mutually
satisfactory or win-win set of objectives, constraints, and priorities [Boehm 03b].
As the project goes on, risk considerations, stakeholders' priority changes, new
COTS releases, and other dynamic considerations may alter the OC&P's. In
particular, if no suitable COTS packages are identified (P6), the stakeholders may
change the OC&P's and the process is restarted with these new considerations.
For the OIV project, the original client was a USC librarian whose
collections included access to some recently-digitized newspapers
covering the early history of Los Angeles. Her main problem was
that the newspapers were too large to fit on mainstream computer
screens. The high priority system capability included not just image
navigation and zoom-in/zoom-out; but image catalog and metadata
storage, update, search, and browse; image archive management;
and access administration capabilities. Lower priorities involved
potential additions for text search, usage monitoring, and trend
analysis.
Her manager, who served as the customer, had two top-priority
system constraints as her primary win conditions. One was to keep
the cost of the COTS product below $25K. The other was to get
reasonably mature COTS products with at least 5 existing supported
customers.
The student developer team’s top-priority constraint was to ensure
that the system’s Initial Operational Capability (IOC) was scoped to
be developable within the 24 weeks they had for the project.
P2: Relevant COTS products exist? In most cases, stakeholders are aware of
some available COTS packages that can provide part or all of their needed
functionality. A COTS assessment activity will then help them determine the best
option. However, the project will follow a custom development approach if the
stakeholders find that no relevant COTS products are available.
For the OIV project, the original client was aware that some COTS
products were available to do this. She wanted the student developer
team to identify the best COTS product to use, and to integrate it
into a service for accessing the newspapers’ content, covering the
full system capability. Therefore, the process proceeds to P3
(Assess COTS candidates), which is elaborated in Section 3.4.1 and
Figure 3.10.
P4: Tailoring Required? When a certain COTS product can satisfy all the
OC&P's, there is no need to develop application code or glue code. The selected
COTS may still need to be tailored in order to work in the specific system context.
P5: Multiple COTS cover all OC&P’s? If a combination of COTS products
can satisfy all the OC&P’s, they are integrated via glue-code. Otherwise, COTS
packages are combined to cover as much of the OC&P’s as feasible and then
custom code is developed to cover what remains.
P6: Can Adjust OC&P's? When no acceptable COTS products can be
identified, the OC&P’s are re-examined for areas that may allow more options. Are
there constraints and priorities that may be relaxed? Can the objectives (easier to
modify than requirements) be modified to enable consideration of more products?
Are there analogous areas in which to look for more products and alternatives?
P8: Coordinate Application Code development and Glue Code effort.
Custom developed components must eventually be integrated with the chosen
COTS products. The interfaces will need to be developed so they are compatible
with the COTS products and the particular glue code connectors used. This means
that some glue code effort will need to be coordinated with the custom
development. This coordination needs to continue through the concurrent
development of custom code and glue code (P9, P10).
The framework in Figure 3.9 looks sequential, but its elements support recursive
and reentrant use to accommodate the frequent go-backs involved in CBA processes.
These can include emergence of new COTS candidates or releases; changes in
enterprise-wide COTS preferences; or emergence of important new users with
additional value propositions or OC&P’s. As discussed in Section 3.1.3, there are
frequently-occurring ATGC patterns or “genetic codes” characterizing CBA
processes [Port 04].
The framework and process elements can generate the development activity
sequences indicated in Table 3.1 by noting the order in which these process
elements are visited. Each area may be entered and exited in numerous ways, both
from within the area itself and by following the process decision framework of
Figure 3.9. In addition, this scheme was developed from and is consistent with the
CBA activity distributions of Figures 3.2 and 3.3. In particular, only (and in fact
all) "legal" distributions are possible; for example, the fact that all distributions
include assessment effort is consistent with all paths in the framework initially
passing through the assessment element (area "A"). Every anticipated sequence
pattern discussed in Section 3.1.4 can be mapped to a path within the decision
framework. Moreover, every unanticipated sequence pattern is valid within the
CBA framework with appropriate project rationale.
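The idea of reading a project's process off as the order in which the A, T, G, and C elements are visited can be sketched as a toy walk; the decision names and the encoding below are invented for illustration and are not from the dissertation's tooling:

```python
# A toy encoding of one walk through the process decision framework.
# Each visited process element is logged by its letter, yielding the ATGC
# "genetic code" characterizing the project's process [Port 04].
def walk(decisions):
    """decisions: answers to the P4/P5-style questions for one project walk."""
    sequence = ["A"]                        # every path first passes assessment
    if decisions["single_full_cots"]:
        if decisions["tailoring_required"]:
            sequence.append("T")
    elif decisions["multiple_cots_cover_all"]:
        sequence.append("G")                # glue code integrates the COTS set
    else:                                   # partial COTS solution best
        sequence += ["G", "C"]              # coordinated glue + custom code
    return "".join(sequence)

print(walk({"single_full_cots": False,
            "multiple_cots_cover_all": False,
            "tailoring_required": False}))  # AGC
```

A real walk would also record re-entries (e.g. AATG for two assessment passes), but the principle is the same: the visit order is the process signature.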
Table 3.6 presents a mapping between PDF process areas and Spiral segments.
The rows consist of the four basic Spiral segments, and the columns list the
corresponding optional CBA process elements with respect to the different
MBASE phases.
Table 3.6 Mapping Win Win Spiral segments to CBA PDF

ID  Spiral Segment                  Inception      Elaboration              Construction                 Transition
1a  Identify Stakeholders           P1             P1                       P1                           P1
1b  Identify OC&P's, alternatives   P1, P2         P1, P2                   P1, P2                       P1, P2
2a  Evaluate Alternatives           P3             P3                       P3                           P3
2b  Assess, Address Risks           Risk Analyzer  Risk Analyzer            Risk Analyzer                -
3   Elaborate P&P Definitions       P3, P4, P5     P3, P10, P11             P3, P8-P11                   -
4a  Verify and Validate P&P         -              -                        -                            -
4b  (1a) Stakeholders' Review       P6 (A6)        P4, P5, P6 (A6, T2, T4)  P4, P5, P6 (A6, T2, T4, G2)  -
    and Commitment
3.4 Composable Process Elements
3.4.1 Assessment Process Element (APE)
Some recently proposed approaches to COTS assessment processes identify key
tasks and artifacts, such as those in [Dorda 03, ISO 99, Lawlis 01], but fail to
address the concurrency and high coupling among COTS assessment, tailoring,
and glue code development. Figure 3.10 shows the CBA APE, which provides
these linkages.
Figure 3.10 Assessment Process Element (APE)
Entry Conditions
1. A set of stakeholder negotiated OC&P’s for the system
2. Available relevant COTS products.
Steps
The first-level steps involved by an Assessment process element include:
A1: Establish evaluation criteria and weights; identify COTS candidates
A2: Initial filtering via document/literature review
A3: Prepare for detailed assessment
A4: Detailed assessment
A5: Collect data and analyze assessment results
A6: Clear choice?
A1: Establish evaluation requirements. This includes establishing COTS
evaluation criteria and their corresponding weights, defining business scenarios,
and identifying COTS candidates, all derived from the OC&P's. Assessment
checklists are provided in [Abts 01, Boehm 00, Yang 04].
A2: Initial Filtering. Initial assessment tries to quickly filter out the
unacceptable COTS packages with respect to the evaluation criteria. The objective
of this activity is to reduce the number of COTS candidates needing to be evaluated
in detail. If no available COTS products pass this filtering, this assessment element
ends up at the “none acceptable” exit.
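A minimal sketch of this filtering step follows; the candidates and hard constraints are invented, loosely echoing the OIV project constraints described later in this chapter:

```python
# Initial filtering (A2): drop candidates that fail any hard constraint
# before investing in detailed assessment.  Names and figures are
# illustrative only.
candidates = {
    "ER Mapper":  {"cost": 20_000, "user_orgs": 40},
    "Mr SID":     {"cost": 15_000, "user_orgs": 25},
    "System XYZ": {"cost": 40_000, "user_orgs": 30},
    "System ABC": {"cost": 10_000, "user_orgs": 3},
}
constraints = [
    lambda c: c["cost"] <= 25_000,   # e.g. customer: COTS cost below $25K
    lambda c: c["user_orgs"] >= 5,   # e.g. at least 5 supported customers
]
remaining = [name for name, c in candidates.items()
             if all(check(c) for check in constraints)]
print(remaining)   # ['ER Mapper', 'Mr SID']
```

If `remaining` comes back empty, the assessment element takes the "none acceptable" exit described above.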
A3: Prepare for detailed assessment. Remaining COTS candidates will
undergo more detailed assessment, and in some cases, multiple rounds of detailed
assessment are needed to incorporate iteratively refined evaluation criteria.
Example preparation activities include vendor contact, obtaining COTS (or test
version), installation, and refining evaluation criteria and weights.
A4: Detailed assessment. The focus of detailed assessment is to apply various
techniques such as prototyping, black-box testing, interoperability testing, and
business case analysis on COTS candidates in order to evaluate their relative
fitness. The COCOTS cost estimation model is integrated into A4 to support the
business case by deriving the Total Ownership Cost (TOC) associated with each
alternative solution.
Usually, in order to obtain more assured evaluation data, some COTS products
need to be tailored (e.g., to assess usability), and some need to be integrated by glue
code development (e.g., to assess interoperability). Therefore, the tailoring process
element (discussed in Section 3.4.2) and the glue code process element (discussed
in Section 3.4.3) might be initiated separately or concurrently to support detailed
assessment.
A5: Collect evaluation data and analyze evaluation results. Data and
information about each COTS candidate will be collected and analyzed against
evaluation criteria in order to facilitate trade-offs and decision making. In this step,
a screening matrix or analytic hierarchy process is a useful and common approach
to analyze collected evaluation data.
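A screening matrix of this kind reduces to a weighted sum per candidate; the criteria, weights, and scores below are invented for illustration:

```python
# Screening matrix (A5): rank candidates by weighted sum of criterion scores.
# Criteria, weights, and 0-10 scores are illustrative, not project data.
weights = {"image_navigation": 0.4, "platform_support": 0.35, "cost": 0.25}

scores = {
    "ER Mapper": {"image_navigation": 9, "platform_support": 4, "cost": 6},
    "Mr SID":    {"image_navigation": 7, "platform_support": 9, "cost": 8},
}

def weighted_score(candidate):
    """Weighted sum of the candidate's scores over all criteria."""
    return sum(weights[k] * scores[candidate][k] for k in weights)

ranked = sorted(scores, key=weighted_score, reverse=True)
for name in ranked:
    print(f"{name}: {weighted_score(name):.2f}")
```

The analytic hierarchy process mentioned above differs mainly in how the weights themselves are derived (from pairwise comparisons); the final aggregation is similar.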
A6: Clear COTS choice? This step establishes the exit direction from the
COTS assessment process.
Exit Conditions
The following three exit directions have been identified:
Single full COTS solution best: a single COTS product covers all desired
OC&P's;
Partial COTS solution best: either a single COTS product or a combination
of COTS products is selected, but the COTS cover only part of the OC&P's,
and custom development and/or glue code development is needed to meet
the remaining OC&P's;
No acceptable COTS: pure custom development is the optimal solution,
unless the stakeholders are willing to adjust the unsatisfied OC&P's.
Relation to Spiral Model
With respect to the Win Win Spiral Model in Figure 3.8, there are frequently
two passes through steps A3, A4, and A5 in Figure 3.10. The first pass ends in a
Life Cycle Objectives (LCO) milestone at step A6, at which stakeholders are
provided evidence that there does or does not exist at least one architecture and set
of COTS products providing a feasible solution to the stakeholders' objectives and
constraints. The "does not exist" branch goes back to step P6 in the overall
framework to see if the OC&P’s can be adjusted to obtain a feasible and
satisfactory solution. If a single full-COTS solution is feasible and preferable, the
project has thereby passed its Life Cycle Architecture (LCA) milestone, and can
proceed into tailoring, productization, and deployment. Otherwise, the project still
has more detailed assessment and risk resolution to perform in a Spiral 2
culminating in an LCA milestone.
Supplemental Activity
Market trend analysis is often a useful and critical technique for gathering
broader and up-to-date information for detailed assessment. For example, a
market-watch activity can obtain the latest information regarding COTS products
or standards and collect COTS information from their current users. This can also
yield new value propositions requiring value-added backtracking in the CBA PDF.
Process Guidelines
Three assessment process guidelines have been developed to support the
Assessment process element: the COTS Assessment Background (CAB), the COTS
Assessment Plan (CAP), and the COTS Assessment Report (CAR). These three
guidelines are extended from the MBASE guidelines [Boehm 01], which focus on
an integrated set of success, process, product, and property models. The CAB
document provides the essential set of OC&P's and the situation background
needed to perform the COTS assessment; it will be elaborated further shortly.
The CAP document covers the “why/whereas, what/when, who/where, how, and
how much” aspects of the activity being planned. The CAR document presents the
major results, conclusions, and recommendations. See [Yang 04] for the details of
these three guidelines.
The ISO/IEC 14598-4 standard [ISO 99] defines a set of artifacts with similar
goals. However, some specific guidance that is important to include in the CAB,
CAP, and CAR guidelines is not found in ISO/IEC 14598-4: Results Chain
analysis, value-based OC&P identification, and process planning to support
concurrent COTS assessment, tailoring, and glue code activities. Table 3.7
presents a rough mapping between the artifacts defined by the Assessment process
element and those of ISO/IEC 14598-4; the more important aspects of the CBA
assessment guidelines are exemplified through the CAB guideline next.
Table 3.7 Comparing CBA APE to ISO/IEC 14598-4

CBA Assessment Process Element      ISO/IEC 14598-4
COTS Assessment Background (CAB)    Evaluation Requirement Specification
COTS Assessment Plan (CAP)          Evaluation Plan
COTS Assessment Report (CAR)        Evaluation Specification + Evaluation Records and Results
More specifically, the following known approaches/techniques are adapted and
integrated by the CAB guideline.
Results Chain: CAB integrates the Results Chain to facilitate the elaboration
of the system's shared vision. The Results Chain provides a valuable
framework by which a project can work with its clients to identify additional
non-software initiatives that may be needed to realize the potential benefits
enabled by the project initiative. These may also identify additional
success-critical stakeholders who need to be represented and "brought into"
the shared vision.
New categories of system stakeholders: three new types of key stakeholders
are included: COTS vendor, domain expert, and COTS technical expert.
Each of these is very important for COTS assessment intensive projects.
Templates: Templates for describing system objectives, constraints, and
prioritized capabilities are extended from MBASE for COTS assessment.
OO Models: Use-Case Diagram is adopted to describe operational processes
within current organization.
COTS Assessment Attribute Checklist, which serves as a starting point for
establishing evaluation criteria.
In order to improve the completeness and consistency of the document, CAB
also integrates the following traceability requirements among artifacts from
MBASE:
Organization goals should map to initiatives in Results Chain;
Business processes should trace back to organization goals;
System capabilities should map to services defined in system boundary.
These guidelines have been applied in six COTS assessment-intensive projects
during 2003-2005, and have been continuously refined based on project feedback.
Qualitative observations indicate that clients are happy with this new set of
deliverables, and that teams are more comfortable applying these guidelines than
the general MBASE guidelines on COTS assessment projects. Further work will
involve validation through quantitative results.
3.4.2 Tailoring Process Element (TPE)
In most cases, COTS packages need to be tailored in order to work in a specific
system context. While several COTS products may be tailored simultaneously, one
tailoring element has to be initiated for each. The TPE is illustrated in Figure 3.11.
Entry Conditions
The entry conditions include the COTS package that needs tailoring and the
activity that initiates the tailoring.
Figure 3.11 Tailoring Process Element (TPE)
For example, the COTS may be under consideration by the assessment element;
be adapted and ready for integration with other components; or be fully assessed but
needing some specialization for the particular application. This helps to determine
the exit direction later.
Steps
T1: Identify Tailoring Options. As shown in Table 3.8, tailoring options
include GUI operations, parameter setting, and programming specialized scripts.
A COTS product may offer multiple tailoring options; in such cases, a decision
must be made as to which capabilities are to be implemented by which option.
Table 3.8 Tailoring methods and common evaluation parameters

Eval. Parameters      GUI Based                        Parameter Based            Programmable
Design Details        Low - None                       Low                        Detailed
Complexity            Low - Moderate                   Moderate                   High
Adaptability          Low                              Low - Moderate             High
Developer Resources   Low                              Low - Moderate             Moderate - High
Example               WYSIWYG (What You See Is         Browsers - Java Scripts,   J2EE, Microsoft Word
                      What You Get) HTML editors,      Windows Media Encoder      2000, ERP packages
                      Windows Media Player
T2: Clear Best Choice? The best choice can be enforced either by the COTS
product, i.e., products supporting a single tailoring method, or by the developers'
tailoring knowledge and skills. If a dominant COTS tailoring option is found, then
the developers can proceed to design and plan tailoring following that clear option
(T5). If there are still multiple tailoring options, the developers need to evaluate
them for the best option (T3).
T3: Perform tailoring effort vs. functionality trade-off. When there is no
single best choice of tailoring method, the team must perform trade-off analyses
between the effort required by available tailoring methods and the functionality that
the team hopes to achieve via tailoring. It is not uncommon, even in the same
product domain, for tailoring effort to significantly vary depending upon the COTS
package selected. Automated tools, such as Microsoft Excel’s macro recorder can
significantly reduce the tailoring effort. Another factor that the teams must consider
whilst performing the tradeoff is the amount of re-tailoring that will be required
during a COTS refresh cycle. The integration of the COCOTS cost estimation
model provides a supporting mechanism for this type of trade-off analysis through
its Tailoring submodel parameters [Abts 01]. More specifically, the Tailoring
Complexity Quantifier in the COCOTS Tailoring submodel evaluates the amount
of effort required to implement a particular design, the complexity of the tailoring
needed, the need for adaptability, and compatibility with other COTS tailoring
choices. The developers can then make decisions based on available resources.
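A toy version of this trade-off compares tailoring options by functionality gained per unit of estimated effort; the figures below are illustrative only and are not COCOTS Tailoring-submodel outputs:

```python
# Tailoring effort vs. functionality trade-off (T3), sketched as
# "functionality points per hour of tailoring effort".  All numbers invented.
options = {
    "GUI-based":       {"effort_hours": 20,  "functionality_pct": 60},
    "parameter-based": {"effort_hours": 60,  "functionality_pct": 80},
    "programmable":    {"effort_hours": 200, "functionality_pct": 100},
}

def value_per_hour(opt):
    """Functionality achieved per hour of tailoring effort for an option."""
    o = options[opt]
    return o["functionality_pct"] / o["effort_hours"]

best = max(options, key=value_per_hour)
print(best)   # GUI-based
```

A real analysis would also weigh absolute functionality shortfalls and the expected re-tailoring cost per COTS refresh cycle, not just the ratio.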
T4: Tailoring-functionality trade-off feasible? The goal of this step is to
review the trade-off analysis results and obtain stakeholder commitment before
preparing to do the actual tailoring.
T5: Design and Plan tailoring. While tailoring may be straightforward for
non-distributed, single-solution systems, it can be extremely complex for large
distributed applications, such as Enterprise Resource Planning packages. In such
cases, teams are recommended to plan the tailoring. Planning formats may vary
from a simple checklist of steps to a complete set of UML diagrams (for
programmable tailoring).
Exit Conditions
The COTS package is parameterized, customized, or configured, or scripts are
written to tune it, so that it is ready for detailed assessment, glue code
development, or final productization.
3.4.3 Glue Code Process Element
Only in rare cases will the combination of COTS components and application
components being integrated or assessed easily plug and play together. If not,
some glue code needs to be defined and developed to integrate the components,
and some evaluation needs to be performed to converge on the best combination
of COTS, glue code, and application code, reducing the risk of integrated
components failing to perform as expected. Figure 3.12 illustrates the activities
and decisions made when working with glue code.
Figure 3.12 Glue Code Process Element (GPE)
Entry Conditions
The primary entry conditions are a combination of COTS products that require
glueware and/or custom code for successful operation, and the activity that initiates
the glue code element. The latter condition helps to determine the exit direction.
Steps
G1: Architect and Design Glueware. The first step in the glue-code process
element involves the development of architecture and design of glueware that will
be required to integrate the application. Major architectural considerations include
determination of interconnection topology options to minimize the complexity of
interactions, selection of connectors [Mehta 00] (e.g. events, procedure calls, pipes,
shared memory, DB, etc.), support for connectors and architectural styles provided
by COTS package interfaces, and architectural mismatches between COTS
packages [Garlan 95, Abd-Allah 96]. [Gacek 98] describes 23 architectural
mismatches classified according to dynamism, data transfer, triggering,
concurrency, distribution, layering, encapsulation, and termination features. It also
classifies different types of connectors, including call, spawn, data connector,
shared data, triggered call, triggered spawn, triggered data transfer, and shared
machine, providing a foundation for identifying architectural mismatches. An
example approach to evaluating COTS interoperability is provided in [Bhuta 05].
G2: Architecture Feasible? This step focuses on acquiring stakeholder
commitment to a given architecture. If it is determined that no architecture can
integrate the selected set of COTS packages within project constraints, the team
may need to revisit the assessment process element to identify an alternate
combination of packages.
G3: Tailoring Required? The team at this point needs to determine if tailoring
will be required to satisfy system OC&Ps. If tailoring is required, the project will
carry out the tailoring process element to meet system objectives. These may be
selective as for the supply chain assessment prototype, or incremental in an
incrementally-fielded application.
G4: Develop Glue Code and Integrate. The final step involves the
implementation of the connector infrastructure, development of appropriate
interfaces (simultaneously with application code interfaces if necessary as indicated
within the application code process), and integration of components to build the
application.
Exit Conditions
Glue code among the selected COTS packages and/or custom components is
developed and ready for productization, testing, and transition. If no COTS
combination can be feasibly integrated via glue code, either the OC&P's need to
be adjusted or a custom solution pursued (P6).
3.5 Generating Composable Process Instances
This section provides more details about how to use the proposed solution and
Win Win Spiral Model to generate composable CBA processes through the
example “Oversized Image Viewer” (OIV) project in its first three Spiral cycles.
3.5.1 Project Description
In the OIV project, the original client needed a system to support viewing of
digitized collections of old historical newspapers, but other users became interested
in the capability for dealing with maps, newspapers, art works and other large
digitized images. The full system capability included not just image navigation and
zoom-in/zoom-out; but image catalog and metadata storage, update, search, and
browse; image archive management; and access administration capabilities. Several
COTS products were available for the image processing functions, each with its
strengths and weaknesses. None could cover the full system capability, although
other COTS capabilities were available for some of these. As the initial operational
capability (IOC) was to be developed as a student project, its scope needed to be
accomplishable by a five-person development team in 24 weeks. How the CBA
Process Decision Framework and the flexible use of composable process elements
were applied to the OIV example is described in the appropriate places below.
3.5.2 Process Description
The process description provided here for the Oversize Image Viewer (OIV)
project covers the project’s first three spiral cycles. Each cycle description begins
with its use of the WinWin Spiral Model, as the primary sequencing of tasks is
driven by the success-critical stakeholders’ win conditions and the project’s major
risk items.
It shows that the framework is not used sequentially, but can be re-entered if
the Win Win Spiral risk patterns cause a previous COTS decision to be
reconsidered. The resulting CBA decision sequence for the OIV project was a
composite process, requiring all four of the Assessment (A), Tailoring (T), Glue
Code and Integration (G), and Custom Development (C) process elements.
3.5.3 Spiral Cycle 1
Table 3.9 shows the major spiral artifacts and CBA PDF activities in the OIV
project's first spiral cycle, which identify different process/product alternatives.
Table 3.9 OIV Spiral Cycle 1 – Alternative identification

Spiral Artifact     Where    What
Stakeholders        P1       Developer, customer, library-user client, COTS vendors
OC&P's              P1       Image navigation, cataloguing, search, archive and access
                             administration; COTS cost ≤ $25K; ≥ 5 user organizations;
                             IOC developable in 24 weeks
Alternatives        P1, P2   ER Mapper, Mr SID, Systems ABC, XYZ
Evaluation; Risks            XYZ > $25K; ABC < 5 user org's; ER Mapper, Mr SID
                             acceptable; Risk: picking wrong product without exercise;
                             What to do next?
Table 3.10 illustrates how risk considerations drive the process sequencing for
the next Spiral cycle by composing the corresponding process elements to avoid
the risk of picking the wrong product.
Table 3.10 also shows that Spiral cycle 1 ended with a new decision (at step
P6) to revisit Assessment, with likely new OC&P's emerging from other OIV-user
stakeholders as evaluation criteria. Thus we can see that the CBA decision
framework is not sequential, but needs to be recursive and reentrant depending on
risk and OC&P decisions made within the Win Win Spiral process.
Table 3.10 OIV Spiral Cycle 1 – Evaluation and elaboration

Spiral Artifact       What                                      Next process element
Risk Addressed        Exercise ER Mapper, Mr SID                APE for newspaper image files
Risk Resolution       ER Mapper image navigation, display
                      stronger
Product Elaboration   Use ER Mapper for image navigation,
                      display
Process Elaboration   Tailor ER Mapper for library-user         TPE for ER Mapper
                      Windows client
Product/Process       Customer's new objective: want campus-    P5
                      wide usage, support of Unix, Mac
                      platforms
Commitment            Customer will find Unix, Mac user         P6
                      community representatives
Therefore, plans were made to tailor ER Mapper as a partial COTS solution
within the overall product solution, and to integrate it with other COTS and/or
application code, as ER Mapper was not a complete application solution for such functions as
cataloguing and search. When the customer reviewed these plans, however, she felt
that the investment in a campus OIV capability should also benefit other campus
users, some of whom worked on Unix and Macintosh platforms. She committed to
find representatives of these communities to participate in a re-evaluation of ER
Mapper and Mr. SID for campus-wide OIV use. The client and developers
concurred with this revised plan for spiral cycle 2.
This changed value proposition required the project to backtrack from step P5
(multiple COTS cover all OC&P’s) back to step P3 (Assess COTS candidates) in
the CBA PDF in Figure 3.5.
3.5.4 Spiral Cycle 2

Spiral cycle 2 begins by revisiting additional stakeholders and further market
trend analysis, which may lead to revisiting the LCO COTS solution as discussed
above for the OIV project. For the OIV function, ER Mapper was filtered out
without further evaluation when its vendor declined to guarantee early Unix and
Mac versions. Some tailoring was required to verify that Mr. SID performed
satisfactorily on Unix and Mac platforms. Table 3.9 shows the major spiral
artifacts and CBA PDF activities in the OIV project's second spiral cycle.
Table 3.9 OIV Spiral Cycle 2 – Alternative identification

Spiral Artifact | Where | What
Stakeholders | P1 | Additional user representatives (Unix, Mac communities)
OC&P's | P1 | System usable on Windows, Unix, and Mac platforms
Alternatives | P1, P2 | ER Mapper, Mr SID
Evaluation; Risks | P3(A, T) | ER Mapper Windows-only; plans to support Unix, Mac; schedule unclear; Mr SID supports all 3 platforms; Risk of Unix, Mac non-support; What to do next?
Table 3.10 illustrates how the risk consideration drives the process sequencing
for the next cycle by composing appropriate CPEs.
Table 3.10 OIV Spiral Cycle 2 – Evaluation and elaboration

Spiral Artifact | What | Next process element
Risk Addressed | Ask ER Mapper for guaranteed Unix, Mac support in 9 months | APE for newspaper image file
Risk Resolution | ER Mapper: no guaranteed Unix, Mac support even in 18 months |
Product Elaboration | Use Mr SID for image navigation, MySQL for catalog support, Java for admin/GUI support |
Process Elaboration | Prepare to tailor Mr SID, MySQL to support all 3 platforms | P4, P5, APE, TPE for Mr. SID
Product Process | Need to address Mr SID/MySQL/Java interoperability, glue code issues; GUI usability issues | APE for the cataloguing and GUI functions
Commitment | Customer will buy Mr SID; Users will support GUI prototype evaluations | P4, P5, P6
Concurrently, Assessment filtering and evaluation tasks were being performed
for the cataloguing and GUI functions. This concurrency is a necessary attribute of
most current and future CBA processes. Simple deterministic process
representations are inadequate to address the dynamism, time-criticality,
and varying risk/opportunity patterns of such CBAs.
3.5.5 Spiral Cycle 3

Table 3.11 shows the major spiral artifacts and CBA PDF activities in the OIV
project's third spiral cycle that drive the process sequencing.

Table 3.11 OIV Spiral Cycle 3 – Alternative identification

Spiral Artifact | Where | What
Stakeholders | P1 | Additional end-users (staff, students) for usability evaluation
OC&P's | P1 | Detailed GUI's satisfy representative users
Alternatives | P1, P2 | Many GUI alternatives
Evaluation; Risks | P3(A, T) | Risk of developing wrong GUI without end-user prototyping; Mr SID/MySQL/Java interoperability risks; What to do next?
Table 3.12 OIV Spiral Cycle 3 – Evaluation and elaboration

Spiral Artifact | What | Next process element
Risk Addressed | Detailed assessment by prototyping full range of system GUI's, Mr SID/MySQL/Java interfaces | A, T, G for detailed interoperability of Mr. SID, MySQL, and the GUI software on the Windows, Unix, and Mac platforms
Risk Resolution | Acceptable GUI's, Mr SID/MySQL/Java glue code |
Product Elaboration | Develop production Mr SID/MySQL/Java glue code |
Process Elaboration | Use Schedule as Independent Variable (SAIV) process to ensure acceptable IOC in 24 weeks | A (T, G): detailed assessment that involves T and G
Product Process | Need to prioritize desired capabilities to support SAIV | P5, P8-10, P12
Commitment | Customer will commit to post-deployment support of software; Users will commit to support training, installation, operations |
Table 3.12 illustrates how the risk consideration drives the process sequencing
by composing an APE that involves a TPE and a GPE to further assess the
interoperability characteristics of Mr. SID, MySQL, and the GUI software on the
Windows, Unix, and Mac platforms.
Subsequent spiral cycles to develop the core capability and the IOC did not
involve further Assessment, but involved concurrent use of the Tailoring, Glue
Code, and custom development processes, which will not be further discussed (see
[Boehm 03] for details about this case study).
The OIV example demonstrates that the Win Win spiral process provides a
workable framework for dealing with risk-driven concurrency, and the composable
CBA decision framework and process elements provide workable approaches for
handling the associated CBA activities. The dynamism and concurrency make it
clear that the CBA process elements need to be recursive and reentrant, but they
provide a much-needed structure for managing the associated complexity.
Chapter 4 COCOTS Risk Analyzer
The CBD Process Decision Framework enables CBD projects to generate
flexible process instances that best fit their project situation and dynamics.
However, making optimal process decisions is never an easy job, and it requires a
comprehensive evaluation of COTS costs, benefits, and risks within the feasibility
analysis [Boehm 99b] and project business case [Reifer 01].
Good project planning requires the use of an appropriate process model as well
as effective decision support techniques. This chapter presents a risk-based
prioritization approach that is used in the context of the CBD Process Decision
Framework (PDF) with the support of the COCOTS Risk Analyzer. It enables the
user to obtain a COTS glue code integration risk analysis with no inputs other
than the set of glue code cost drivers the user already submits to obtain a glue
code integration effort estimate from the COnstructive COTS integration cost
estimation (COCOTS) tool.
Section 4.1 gives a short background on the COCOTS model, on which the method
is based. Section 4.2 discusses the method and its steps. Section 4.3 introduces
the implementation of the method. Section 4.4 examines the performance of its
application to two sets of COTS integration projects.
4.1 Background
Most risk analysis tools and techniques require the user to enter a good deal
of information before they can provide useful diagnoses. For example, current CBD
risk assessment techniques are largely checklist-based [Rashid 01] or
questionnaire-based [Carney 03], which require a large amount of user input
as well as the subjective judgment of domain experts. These techniques are
human-intensive and thus often difficult to apply due to the scarcity of seasoned
experts and the unique characteristics of individual projects. Where possible,
risk assessment should be supported by automated processes and tools that are
based on project attributes that can be quantitatively measured using project
metrics.
Cost estimation models, e.g. the widely used COCOMO II model for custom
software development and its COTS extension, the COCOTS model, incorporate
cost drivers to adjust development effort and schedule calculations. As
significant project factors, cost drivers can be used for identifying and
quantifying project risk. A rule-based heuristic risk analysis method and its
implementation were developed and introduced in [Madachy 97], where rules were
structured around COCOMO II cost factors, reflecting an intensive analysis of
the potential internal relationships between cost drivers. Another technique for
integrating risk assessment with cost estimation attributes is discussed in
[Kansala 97].
The work in this study is based on our previous work on the COnstructive
COTS integration cost estimation model (COCOTS), especially its Glue Code
sub-model, and on our analysis of COTS integration risk profiles. The cost
factors defined in the COCOTS model are used in our risk assessment method to
develop and identify risk patterns, in terms of cost driver combinations, in
association with particular risk profiles within COTS-based development. The
following sections give a quick overview of the COCOTS model and the COCOTS
Glue Code sub-model. For more details, the reader is referred to [Abts 01].
4.1.1 COCOTS model

As briefly introduced in Section 2.2, while the COCOTS model was developed
with the intention to determine the economic feasibility of COTS-based solutions,
its model structure covers most, if not all, of the important project
characteristics that should be carefully examined when assessing CBD project
risk. One such example is the risk assessment method in [Boehm 03c], which uses
the set of COTS assessment attributes defined in the COCOTS Assessment
sub-model to help determine the "sweet spot" amount of COTS assessment effort
that balances the project's "too many errors" risk against its "late delivery"
risk.
The COCOTS Glue Code sub-model is a good starting point for developing a
risk analyzer to assess CBD project risks. This is because, among the three
sub-models of COCOTS, the formulation of the Glue Code sub-model uses the same
general form as COCOMO, i.e. a well-defined and calibrated set of cost
attributes that capture the critical aspects of the project situation.
4.1.2 Glue Code Sub-model in COCOTS

The COCOTS Glue Code sub-model presumes that the total amount of glue code to
be written for a project can be predicted and quantified in KSLOC (thousands of
source lines of code), and that the broad project conditions in terms of
personnel, product, system, and architecture issues can be characterized by
standardized rating criteria. Based on these, fifteen parameters have been
defined in the COCOTS Glue Code sub-model, as shown in Table 4.1.
Table 4.1 COCOTS Glue Code sub-model cost drivers

Category | Name | Definition
Size Driver | Glue Code Size | The total amount of COTS glue code developed for the system
Scale Factor | AAREN | Application Architectural Engineering
Cost Driver | ACIEP | COTS Integrator Experience with Product
Cost Driver | ACIPC | COTS Integrator Personnel Capability
Cost Driver | AXCIP | Integrator Experience with COTS Integration Processes
Cost Driver | APCON | Integrator Personnel Continuity
Cost Driver | ACPMT | COTS Product Maturity
Cost Driver | ACSEW | COTS Supplier Product Extension Willingness
Cost Driver | APCPX | COTS Product Interface Complexity
Cost Driver | ACPPS | COTS Supplier Product Support
Cost Driver | ACPTD | COTS Supplier Provided Training and Documentation
Cost Driver | ACREL | Constraints on Application System/Subsystem Reliability
Cost Driver | AACPX | Application Interface Complexity
Cost Driver | ACPER | Constraints on COTS Technical Performance
Cost Driver | ASPRT | Application System Portability
Except for the size driver, Glue Code Size, which is measured in thousands of
source lines of code (KSLOC), all other cost factors are defined using a 5-level
rating scale: Very Low, Low, Nominal, High, and Very High. Therefore, a COCOTS
estimation input is generally composed of the size input in KSLOC and a specific
symbolic rating ranging from Very Low to Very High for each of the remaining 14
cost factors.
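Since the Glue Code sub-model shares COCOMO's general form, its estimate can be sketched as glue code size raised to a scale exponent, multiplied by the product of the effort multipliers. The sketch below is illustrative only: the constant A, the exponent, and any multiplier values are placeholders, not the calibrated COCOTS parameters.

```python
# Illustrative sketch of the COCOMO-style form the Glue Code sub-model
# follows. The calibration constant A and all numeric values here are
# placeholders, NOT the calibrated COCOTS parameters.

def glue_code_effort(ksloc, scale_exponent, effort_multipliers):
    """Effort grows nonlinearly with glue code size (KSLOC) and is
    adjusted by the product of the multiplicative cost drivers."""
    A = 3.0  # hypothetical calibration constant
    product = 1.0
    for em in effort_multipliers:
        product *= em
    return A * (ksloc ** scale_exponent) * product

# All-nominal ratings (every multiplier 1.0) for 2 KSLOC of glue code:
nominal_effort = glue_code_effort(2.0, 1.1, [1.0] * 14)
```

The key property this form captures is that a single off-nominal driver rating scales the whole estimate multiplicatively, which is what makes the driver ratings usable as risk signals in the sections that follow.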
4.2 Modeling the COCOTS Risk Analyzer
To construct a CBD risk analyzer from the cost factor ratings in a COCOTS
estimation input, we start with knowledge engineering work: formulating risk
rules and a quantification scheme from the COCOTS cost drivers and iteratively
eliciting expert knowledge. The workflow steps in the risk assessment method are
illustrated in Figure 4.1 and discussed in detail next, starting with the
central knowledge base.
Figure 4.1 COCOTS Risk Analyzer Workflow
[Figure: the user's cost factor ratings are the input to six steps, which draw
on a knowledge base of risk rules, a risk level scheme, and mitigation
strategies: (1) identify each cost factor's risk potential rating, (2) identify
risks, (3) evaluate risk probability, (4) analyze risk severity, (5) assess
overall risk, and (6) provide risk mitigation advice; the output is a risk
summary.]
4.2.1 Constructing the knowledge base

As shown in Figure 4.1, the knowledge base consists of three elements: risk
rules, a risk level scheme, and a mitigation strategy. To construct the
knowledge base, a Delphi survey was developed to collect experts' judgment on
risky combinations of COCOTS cost factors and their quantification weighting
scheme. To date, the survey has received responses from 5 domain experts, and a
risk rule table and risk level weighting scheme have been established based on
this Delphi feedback. The risk mitigation strategy is drafted based upon our previous risk
analysis work in [Boehm 03c], aiming to provide specific mitigation plan advice
with respect to the particular risk items identified by the method, especially in the
early phase of the project.
Risk rules

The approach describes a CBD risk situation as a combination of two cost
attributes at their extreme ratings, and formulates such combinations into risk
rules. One example is a project condition whereby COTS product interface
complexity (APCPX) is very high and the staff's experience with the COTS
products (ACIEP) is low. In such a case, cost and/or schedule goals may not be
met, since time will have to be spent understanding the COTS products, and this
extra time may not have been planned for. Hence, a corresponding risk rule is
formulated as:

IF ((COTS Product Complexity > Nominal)
AND (Integrator's Experience with COTS Product < Nominal))
THEN there is a project risk.
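A minimal sketch of how such a rule could be encoded, assuming ratings are compared by their position on the five-level scale. The function and dictionary names below are illustrative, not taken from the actual tool.

```python
# Sketch: encode the example risk rule by comparing symbolic ratings
# through their position on the 5-level COCOTS scale.
RATING_ORDER = ["Very Low", "Low", "Nominal", "High", "Very High"]

def rating_gt(a, b):
    """True if rating a is higher on the scale than rating b."""
    return RATING_ORDER.index(a) > RATING_ORDER.index(b)

def rating_lt(a, b):
    """True if rating a is lower on the scale than rating b."""
    return RATING_ORDER.index(a) < RATING_ORDER.index(b)

def rule_apcpx_aciep(ratings):
    """IF APCPX > Nominal AND ACIEP < Nominal THEN there is a project risk."""
    return (rating_gt(ratings["APCPX"], "Nominal")
            and rating_lt(ratings["ACIEP"], "Nominal"))

# A complex product in inexperienced hands fires the rule:
fires = rule_apcpx_aciep({"APCPX": "Very High", "ACIEP": "Low"})
```

Each of the 24 rules in the knowledge base has this same two-attribute shape, so rule matching reduces to evaluating a small fixed set of such predicates against the input vector.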
Figure 4.2 lists the results of risk situation identification from the 5 Delphi
survey responses. The shaded table entries represent the risky combinations of
COCOTS cost factors identified in the Delphi responses. Different cell shading
patterns correspond to different percentages of responses over the total
responses. For example, there are 24 risk combinations identified by more than
half of the responses (i.e. identified by at least 3 experts in our case;
shading pattern ">=50%" in the figure). Another 25 risk combinations are
identified in two of the five responses (shading pattern "40%"). The "20%"
shading pattern refers to those risk combinations identified in only one of the
five responses.
[Figure: a 15 x 15 upper-triangular matrix with the COCOTS cost factors (SIZE,
AAREN, ACIEP, ACIPC, AXCIP, APCON, ACPMT, ACSEW, APCPX, ACPPS, ACPTD, ACREL,
AACPX, ACPER, ASPRT) on both axes; cells are shaded to show whether a risky
combination was identified by >=50%, 40%, or 20% of the Delphi responses.]
Figure 4.14 Delphi results of risk situationsTo be more conservative, the method uses only the 24 risk combinations that
were recognized in more than half of the Delphi responses as the foundation to
formulate the risk rules for the knowledge base. Note that there is a symmetric risk
matrix relationship between the COCOTS cost factors. Therefore the region under
the diagonal line remains blank.
Another observation from Figure 4.2 is that some cost factors, such as Size,
AAREN, ACREL, APCPX, ACIPC, and ACPMT, are the "most sensitive factors" that can
easily have risky interactions with others (see the shaded area in the figure).
Also note that interactions of cost attributes that are essentially orthogonal
to each other are not identified as risk situations.
The other two elements of the knowledge base, the risk level scheme and the
mitigation strategy, will be introduced later as the workflow brings in the
appropriate context for them.
4.2.2 Identifying cost factors' risk potential ratings

As discussed in Section 4.1.2, a COCOTS glue code estimation input is a vector
of 15 values: one numeric value carrying the size estimate in KSLOC, and 14
symbolic values representing the specific ratings, ranging from Very Low to Very
High, for the other 14 cost factors.
In order to capture and analyze the underlying relation between cost attributes
and the impact of their specific ratings on project risk, some transformation
of the inputted cost driver ratings is necessary. This includes resolving the
difference between the continuous representation of the size driver and the
discrete representation of the other 14 cost factors, as well as establishing a
mapping between cost driver ratings and the probability of risk caused by those
ratings. Through these treatments, the risk assessment method can be developed
and implemented as an automatable means that can be evaluated systematically,
with little involvement of human subjective measures besides the COCOTS ratings
being supplied. Next we introduce these treatments of the inputted COCOTS cost
factor ratings.
Deriving the risk potential rating for the inputted size

To determine risk potential ratings for the inputted glue code size driver, the
method asked each domain expert to discretize the Glue Code Size driver into a
4-level risk potential rating schema; the responses are listed in Table 4.2. The
4 levels of risk potential rating are defined as OK, Moderate, Risk Prone, and
Worst Case, with an increasing probability of causing problems when conflicting
with other cost factor ratings. The median values for each rating level are used
in our knowledge base to derive a risk potential rating from an inputted size
number.
Table 4.2 Delphi Responses for Size Mapping (Size: KSLOC)

Risk Potential Rating | OK | Moderate | Risk Prone | Worst Case
Delphi Response 1 | 1 | 2 | 10 | 50
Delphi Response 2 | 2 | 5 | 10 | 25
Delphi Response 3 | 1 | 3 | 10 | 10
Delphi Response 4 | 1 | 2 | 10 | 50
Delphi Response 5 | 1 | 2 | 10 | 50
Median | 1 | 2 | 10 | 50
Stdev. | 0.447 | 1.304 | 0 | 18.574
Therefore, the continuous representations in KSLOC are transformed into
symbolic ones, both for simplicity of modeling in our study and for generality
in comparing and analyzing them together with the other 14 cost attributes.
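One plausible reading of how the medians drive this transformation is sketched below; treating each median (1, 2, 10 KSLOC) as the upper bound of its level is an assumption, since the exact interpolation rule is not spelled out above.

```python
# Sketch: discretize a glue code size estimate (KSLOC) into the 4-level
# risk potential scale using the Delphi medians from Table 4.2.
# ASSUMPTION: each median is treated as an upper bound for its level;
# the dissertation does not spell out the exact interpolation rule.
SIZE_LEVELS = [(1.0, "OK"), (2.0, "Moderate"), (10.0, "Risk Prone")]

def size_risk_potential(ksloc):
    """Map a continuous size estimate onto the symbolic risk scale."""
    for upper_bound, rating in SIZE_LEVELS:
        if ksloc <= upper_bound:
            return rating
    return "Worst Case"  # anything beyond the Risk Prone band
```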
Deriving risk potential ratings for the other cost factors

This transformation is more straightforward than the handling of the size
driver, because the other 14 cost factors are inputted on the 5-level rating
scheme from Very Low to Very High. The same 4-level risk potential rating scheme
is used to distinguish the high or low risk potential of each driver from its
specific cost driver rating. Table 4.3 shows the detailed mapping schema for the
other 14 cost factors with respect to the influence of their cost driver ratings
on project risk.
Table 4.3 Mapping between cost factor rating and risk potential rating

Cost Factors | Cost Factor Rating | Risk Potential Rating
AAREN, ACIEP, ACIPC, AXCIP, APCON, ACPMT, ACSEW, ACPPS, ACPTD | Very Low | Worst Case
 | Low | Risk Prone
 | Nominal | Moderate
 | High | OK
 | Very High | OK
APCPX, ACREL, AACPX, ACPER, ASPRT | Very Low | OK
 | Low | OK
 | Nominal | Moderate
 | High | Risk Prone
 | Very High | Worst Case
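The mapping in Table 4.3 can be captured directly in code. The split reflects that for the first group of factors a higher rating means lower risk (e.g. personnel capability), while for the second group a higher rating means higher risk (e.g. complexity and constraint factors). The names below are illustrative, not taken from the actual tool.

```python
# Sketch of the Table 4.3 mapping from cost factor rating to risk
# potential rating.
HIGHER_IS_BETTER = {"AAREN", "ACIEP", "ACIPC", "AXCIP", "APCON",
                    "ACPMT", "ACSEW", "ACPPS", "ACPTD"}
TO_RISK = {"Very Low": "Worst Case", "Low": "Risk Prone",
           "Nominal": "Moderate", "High": "OK", "Very High": "OK"}
TO_RISK_INVERTED = {"Very Low": "OK", "Low": "OK", "Nominal": "Moderate",
                    "High": "Risk Prone", "Very High": "Worst Case"}

def risk_potential(factor, rating):
    """Translate one cost factor's symbolic rating into its risk
    potential rating, honoring the factor's direction of influence."""
    table = TO_RISK if factor in HIGHER_IS_BETTER else TO_RISK_INVERTED
    return table[rating]
```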
4.2.3 Identifying project risk situations

With the 24 risk rules formulated and stored in the knowledge base and the
transformed cost drivers' risk potential ratings, the method automatically
checks, matches, and generates a list of risk situations for the specific
project.
4.2.4 Evaluating the probability of risk

Risk impact is defined to measure the potential impact of a given risk; it is
calculated as the probability of loss multiplied by the cost of the loss due to
the risk occurring. This section and the next two discuss how each individual
risk is quantified, in terms of evaluating its probability and severity, and how
the overall project risk is aggregated from the individual risks.
Risk probability weighting scheme

For an identified risk situation, different rating combinations are evaluated
to determine the relative probability level of the risk. For instance, a project
with a Worst Case APCPX and a Worst Case ACIEP conceivably has a greater
probability of running into problems than one with just a Risk Prone APCPX and a
Moderate ACIEP.
Corresponding to the 4-level risk potential rating scheme, we define 4 levels
of risk probability, as visualized in Table 4.4, where each axis is the risk
potential rating of a cost attribute. A risk situation corresponds to an
individual cell containing an identified risk probability level. In this step,
risk rules use the cost drivers' risk potential ratings to index directly into
this table of risk probability levels.
Table 4.4 Assignment of Risk Probability Levels

Attribute 1 \ Attribute 2 | Worst Case | Risk Prone | Moderate | OK
Worst Case | Severe | Significant | General |
Risk Prone | Significant | General | |
Moderate | General | | |
OK | | | |
To quantify the risk probability levels, a quantitative nonlinear weighting
scheme obtained from the expert Delphi survey is used to assign a risk
probability of:
0.1 for general risk,
0.2 for significant risk, and
0.4 for severe risk.
Additionally, 0 is used as the probability for the blank cells in Table 4.4,
i.e. situations where the risk is unlikely to occur.
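Because Table 4.4 is banded by how extreme the two ratings are, the lookup can be collapsed into a sum of the two factors' risk potential levels. This arithmetic shortcut reproduces the table exactly but is our own formulation, not the tool's internal representation.

```python
# Sketch of the Table 4.4 lookup, collapsed into level arithmetic.
LEVELS = {"OK": 0, "Moderate": 1, "Risk Prone": 2, "Worst Case": 3}
WEIGHT = {"none": 0.0, "general": 0.1, "significant": 0.2, "severe": 0.4}

def risk_probability(potential_1, potential_2):
    """Nonlinear probability weight for a risky driver pair."""
    total = LEVELS[potential_1] + LEVELS[potential_2]
    if total >= 6:   # Worst Case x Worst Case
        return WEIGHT["severe"]
    if total == 5:   # Worst Case x Risk Prone
        return WEIGHT["significant"]
    if total == 4:   # Worst Case x Moderate, or Risk Prone x Risk Prone
        return WEIGHT["general"]
    return WEIGHT["none"]  # the blank cells of Table 4.4
```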
4.2.5 Analyzing the severity of risk

Besides the probability of a risk occurring, we need a means to represent and
evaluate the relative severity of the risk occurring. One key term used here is
the productivity range of the cost factors.
In the COCOMO II family, the productivity range (PR) of a cost driver is
defined as the ratio between its highest and lowest values (when treated as an
effort multiplier). For a scale factor, the productivity range is calculated
with a different formula: the default project size of 100 KSLOC is raised to a
power proportional to the difference between the factor's highest and lowest
values.
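The two computations can be sketched as follows. The multiplier values in the usage line are illustrative, and expressing a scale factor's exponent contribution as 0.01 times its value follows the COCOMO II convention, which we assume carries over here.

```python
# Sketch of the two productivity range (PR) computations described above.
# Numeric values in the example call are illustrative, not calibrated.

def pr_effort_multiplier(em_values):
    """PR of an effort multiplier: highest over lowest multiplier value."""
    return max(em_values) / min(em_values)

def pr_scale_factor(sf_highest, sf_lowest, size_ksloc=100.0):
    """PR of a scale factor: raise the default 100 KSLOC project size to
    the difference in exponent contributions. ASSUMPTION: scale factors
    enter the effort equation as 0.01 * SF in the size exponent, per
    COCOMO II convention."""
    return size_ksloc ** (0.01 * (sf_highest - sf_lowest))

# An effort multiplier spanning 0.81 (Very Low) to 1.23 (Very High):
example_pr = pr_effort_multiplier([0.81, 0.91, 1.00, 1.11, 1.23])
```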
Figure 4.3 Productivity range of COCOTS cost factors
[Figure: bar chart of the productivity range of each cost factor, in increasing
order: ASPRT 1.14, ACSEW 1.22, ACPER 1.22, AXCIP 1.42, ACPTD 1.43, ACREL 1.48,
ACPPS 1.48, AACPX 1.69, ACIEP 1.79, APCPX 1.80, AAREN 2.09, ACPMT 2.10, APCON
2.51, ACIPC 2.58.]
Following these definitions, the productivity ranges of the 14 COCOTS cost
factors other than size were calculated; they are shown in Figure 4.3. Based on
experience with the COCOMO risk analyzers, the productivity range is a useful
measurement for reflecting the cost consequence of a risk occurring, since it
combines both expert judgment and industry data calibration from the
development of the COCOTS model.
For the glue code size driver, a value of 2 was obtained from the expert
Delphi analysis as its productivity range for evaluating risks involving size.
4.2.6 Assessing overall project risk

The overall project risk can be quantified with the following equation:

    Project Risk = Σ_i Σ_j ( riskprob_ij × PR_i × PR_j )

Figure 4.4 Risk quantification equation
In the above formula, riskprob_ij is the nonlinear relative probability of the
risk occurring, as discussed in Section 4.2.4, and the product of PR_i and PR_j
represents the cost consequence of the risk occurring, as discussed in Section
4.2.5. Project risk is then computed as the sum of the risk impacts of the
individual risks.
To interpret the overall project risk number, we use a normalized scale from 0
to 100 with benchmarks for low, moderate, high, and very high overall project
risk. The scale is designed as follows: 0-5 low risk, 5-15 moderate risk, 15-50
high risk, and 50-100 very high risk. The value 100 represents the situation
where each cost driver is rated at its most expensive extreme, i.e. its worst
case rating.
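Putting the pieces together, the roll-up and the interpretation scale can be sketched as below. The normalization constant would come from scoring an all-worst-case input; it is shown here as a parameter, and the function names are illustrative.

```python
# Sketch of the overall project risk roll-up: each identified risk
# contributes riskprob_ij * PR_i * PR_j, and the raw sum is normalized
# so that the all-worst-case project scores 100.

def project_risk(identified_risks, pr, worst_case_raw):
    """identified_risks: iterable of (driver_i, driver_j, riskprob) tuples.
    pr: productivity range per driver name.
    worst_case_raw: raw sum for a project rated worst case on every driver."""
    raw = sum(prob * pr[i] * pr[j] for i, j, prob in identified_risks)
    return 100.0 * raw / worst_case_raw

def interpret(score):
    """Map a normalized 0-100 score onto the benchmark bands above."""
    if score < 5:
        return "low"
    if score < 15:
        return "moderate"
    if score < 50:
        return "high"
    return "very high"
```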
4.2.7 Providing risk mitigation advice

Finally, risk mitigation advice, as discussed in our previous work in [Boehm
03, Port 04], can be provided to the user with respect to each risk combination
identified by the risk analyzer. Table 4.5 lists the risk mitigation advice for
the 24 risk rules defined in the current knowledge base.
This is by no means a comprehensive list; however, it is particularly useful
in helping inexperienced CBA developers improve their risk management practice,
and it eventually supports their process decision-making with respect to
particular project situations.
Table 4.5 Risk mitigation advice

No. | Driver 1 | Driver 2 | Description | Mitigation
1 | SIZE | ACREL | Significant integration with high reliability requirement | Reliability-enhancing COTS wrappers; risk-based testing
2 | SIZE | APCPX | Significant integration with complex COTS interfaces | Consider more compatible COTS
3 | SIZE | ACPMT | Significant integration with immature COTS | Consider more mature COTS
4 | ACREL | ACIPC | High-reliability application dependent on incapable COTS integrator | Reliability-enhancing COTS wrappers; risk-based testing; re-staffing; training; consultant mentoring
5 | ACREL | ACPMT | High-reliability application dependent on immature COTS | Consider more mature COTS; reliability-enhancing COTS wrappers; risk-based testing
6 | ACREL | AAREN | High-reliability application dependent on unvalidated architecture | Reliability-enhancing COTS wrappers; risk-based testing
7 | APCPX | ACIPC | Complex integration with inexperienced personnel | Consider more compatible COTS; re-staffing; training; consultant mentoring
8 | APCPX | AAREN | Complex integration with unvalidated architecture | Consider more compatible COTS
9 | APCPX | AACPX | Complex integration with complex application | Consider more compatible COTS
10 | APCPX | ACIEP | Complex integration with inexperienced integrator | Consider more compatible COTS
11 | ACIPC | AAREN | Personnel capability shortfall with unvalidated architecture | Re-staffing; training; consultant mentoring; benchmark current and alternative COTS choices
12 | ACIPC | AACPX | Personnel capability shortfall with complex application | Re-staffing; training; consultant mentoring
13 | ACIPC | APCON | Personnel capability shortfall with personnel discontinuity | Re-staffing; training; consultant mentoring
14 | ACIPC | AXCIP | Personnel capability shortfall with unfamiliar integration process | Re-staffing; training; consultant mentoring
15 | ACIPC | ACPTD | Personnel capability shortfall with lack of vendor training/documentation | Re-staffing; training; consultant mentoring
16 | ACPMT | AAREN | Immature COTS with unvalidated architecture | Consider more mature COTS; benchmark current and alternative COTS choices
17 | ACPMT | ACPPS | Immature COTS with lack of vendor support | Consider more mature COTS
18 | ACPMT | AXCIP | Immature COTS with lack of COTS integration experience | Consider more mature COTS; re-staffing; training; consultant mentoring
19 | AAREN | AACPX | Unvalidated architecture with complex application | Benchmark current and alternative COTS choices
20 | AAREN | AXCIP | Unvalidated architecture with lack of COTS integration experience | Benchmark current and alternative COTS choices
21 | AAREN | ASPRT | Unvalidated architecture with high system portability requirement | Benchmark current and alternative COTS choices
22 | AAREN | ACPER | Unvalidated architecture with COTS performance shortfalls | Benchmark current and alternative COTS choices; reassess performance requirements vs. achievables
23 | APCON | AXCIP | Personnel discontinuity with lack of integration experience | Re-staffing; training; consultant mentoring
24 | APCON | ACPTD | Personnel discontinuity with lack of vendor training/documentation | Re-staffing; training; consultant mentoring
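The advice table lends itself to a simple lookup keyed on the unordered driver pair of each fired risk rule. A minimal sketch follows; the two entries shown are drawn from Table 4.5, while the fallback string and names are our own illustration.

```python
# Sketch: mitigation advice lookup keyed on the unordered driver pair of
# a fired risk rule. Only two Table 4.5 entries are shown for brevity.
MITIGATION = {
    frozenset(["SIZE", "ACREL"]):
        "Reliability-enhancing COTS wrappers; risk-based testing",
    frozenset(["APCPX", "ACIPC"]):
        "Consider more compatible COTS; re-staffing; training; "
        "consultant mentoring",
}

def advice_for(driver_1, driver_2):
    """Return the advice for a driver pair, in either argument order."""
    return MITIGATION.get(frozenset([driver_1, driver_2]),
                          "No advice recorded for this combination")
```

Using a frozenset key mirrors the symmetry of the risk matrix: rule (SIZE, ACREL) and rule (ACREL, SIZE) are the same risk situation.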
4.3 Prototype of COCOTS Risk Analyzer
Since manual execution of the above steps is labor-intensive and error-prone,
a prototype of the COCOTS Risk Analyzer has been developed to automate the risk
assessment method discussed in Section 4.2. It is currently implemented on top
of an MS Access database.
The knowledge base is implemented through 3 tables: a Risk Rule table, a Risk
Probability Scheme table, and a Cost Driver Productivity Range table. There is
also a Project Parameter table used to store the project parameters that the
user submits for COCOTS estimation. Figure 4.5 and Figure 4.6 show screenshots
of the input and output interfaces of the prototype.
Figure 4.5 Input interface

As shown in Figure 4.6, a total risk of 100 refers to the worst case project,
where each cost factor input is at its worst case rating (i.e. either the VH or
the VL rating in Figure 4.5). The list of risk situations is especially helpful
for inexperienced project managers, or for projects in the early inception
phase, in identifying risks that an experienced software manager would
recognize, quantify, and prioritize during project planning.
Figure 4.6 Output interface
4.4 Evaluations
Evaluation of the risk assessment method has involved applying it to 9 small
USC e-services CBA projects and 7 large industry CBA projects. Some
characteristics of the two groups are compared in Table 4.6.
Table 4.6 Comparison of USC e-services and industry projects

 | USC e-services | Industry
Domain | Web-based campus-wide e-services applications such as library services | Generally large-scale communication, control systems
# COTS | 1 ~ 6 | 1 ~ 53
Duration | 24 weeks | 1 ~ 56 months
Effort | 6 persons by 24 weeks | 1 ~ 1411 person-months
Size | 0.2 ~ 10 KSLOC | 0.1 ~ 390 KSLOC
4.4.1 USC e-services projects

Based on the COCOTS estimation reports of the 9 USC e-services projects, the
COCOTS Risk Analyzer was run on the 9 projects' COCOTS inputs. In parallel, the
weekly risk reports of the 9 projects were used to obtain the actual risks
reported by the development teams.
The x-axis in Figure 4.7 is the number of risks reported in the teams' weekly
risk reports; the y-axis is the risk analyzed by applying the risk assessment
method. The trend line shows that a strong correlation exists between the
team-reported risks and the project risks analyzed by the COCOTS Risk Analyzer
tool.
[Figure: scatter plot of analyzed risks (y-axis) vs. reported risks (x-axis),
with regression line y = 0.6749x - 2.3975, R^2 = 0.8948.]
Figure 4.7 Evaluation results of USC e-services projects
4.4.2 Industry projects

To evaluate the performance of the COCOTS Risk Analyzer on large-scale
projects, 7 large industry projects from the COCOTS model calibration database
were analyzed with the help of the expert involved in the COCOTS data
collection, Dr. Elizabeth Clark. She provided her evaluation of the relative
risk of each project based on her early interview notes with key project
personnel during the data collection process.
The x-axis in Figure 4.8 represents the data collector's rating of relative
project risk, evaluated from the interview notes; the y-axis is the risk
analyzed by applying the risk assessment method to the 7 projects' COCOTS cost
driver ratings. Though the correlation in Figure 4.8 is not as strong as that in
Figure 4.7, it still shows a reasonable trend line, with an R-square value of
0.6283 between the predicted and reported risk levels.
[Figure: scatter plot of analyzed risk (y-axis) vs. reported risk probability
(x-axis), with regression line y = 45.75x + 0.6143, R^2 = 0.6283.]
Figure 4.8 Evaluation results of industry projects
4.4.3 Discussion

The evaluation results in Section 4.4.1 show that the COCOTS Risk Analyzer has
done a useful job of predicting project risks in these two different situations.
The evaluation result from the industry projects, as illustrated in Section
4.4.2, is less strong. There are two reasons why this might be the case:
1. The risk records of the large industry projects are incomplete. As explained
in the experiment procedure, the reported risk probabilities of the 7 industry
projects were derived by Dr. Clark from her data collection interview notes,
and no actual risk data was collected, unlike for the projects in the first
dataset.
2. In large industry projects, the COCOTS driver inputs are usually an
aggregate over a number of COTS products. If a project had to integrate more
than one COTS product, the ratings for some COCOTS glue code cost drivers had to
be averaged over the individual COTS products' ratings. For example, in Figure
4.8, the projects at (0.2, 18) and (0.5, 11) had 13 and 4 COTS products,
respectively. In such cases, if COTS A had Very Low maturity (ACPMT = Very Low)
and COTS B had High maturity (ACPMT = High), then the overall project COTS
maturity would probably be rated as Nominal (ACPMT = Nominal); similarly for the
other COTS product attributes, such as interface complexity (APCPX) and vendor
product support (ACPPS). This limitation stems from the lack of COCOTS model
calibration data, and may result in inaccurate driver inputs when analyzing
project risks.
These problems are good candidates for future data validation and model
refinement.
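The rating-aggregation effect described in point 2 can be sketched as follows; the numeric rating scale and the `aggregate_rating` helper are illustrative assumptions, not part of the COCOTS model definition:

```python
# Sketch of averaging per-COTS-product ordinal driver ratings into one
# project-level rating, which can wash out extreme (risky) ratings.
# The 5-point scale mapping is an assumption for illustration.

SCALE = ["Very Low", "Low", "Nominal", "High", "Very High"]

def aggregate_rating(ratings):
    """Average per-product ordinal ratings into one project-level rating."""
    mean = sum(SCALE.index(r) for r in ratings) / len(ratings)
    return SCALE[round(mean)]

# COTS A has Very Low maturity, COTS B has High maturity:
print(aggregate_rating(["Very Low", "High"]))  # -> Nominal, hiding A's risk
```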
Chapter 5 Optimizing CBD Process Decisions
This section introduces a risk-based prioritization strategy for improving software project decisions using the COCOTS Risk Analyzer. This method is particularly useful in supporting many key decisions during the COTS-based development process, such as establishing COTS assessment criteria, scoping and sequencing development activities, and prioritizing features to be implemented in incremental development.
5.1 Project Context
In the last two years, I have had the opportunity to instruct and apply the
framework to 13 e-services projects at the University of Southern California (USC)
and observe their process execution. This environment provided a unique way of
introducing, experimenting, validating, and improving the CBD process decision
framework.
13 USC COTS-based e-service projects applied the CBD process decision framework to plan and monitor their development process. At the same time, four other process drivers were used to help the development teams make their process decisions with respect to volatile project situations. These include: the CBD Experience Base, cost estimation models (i.e., COCOMO II and COCOTS), key stakeholders' win-win negotiation, and COTS market watch. Each of these process drivers plays a different role in helping developers generate appropriate process instances from the framework:
The CBD Process Decision Framework is used as a comprehensive baseline
for generating a particular COTS process.
The CBD Experience Base is a knowledge base of guidelines, patterns, and
models that is empirically formulated and used to support the decision-
making activity during CBD development.
The Constructive COTS cost model (COCOTS) is primarily used to estimate the COTS-associated effort, and the COCOMO II model is used to estimate the portion of custom development effort. The estimation results are used during cost/benefit analysis for choosing the COTS options that produce the best expected life cycle cost-benefit.
Different stakeholders have different expectations and priorities. Explicitly recognizing them and involving them in win-win negotiations helps ensure that all relevant areas are identified and addressed.
COTS market and COTS vendor are two important variation sources that
introduce the most change factors. Therefore, it is critical for the developers
to keep monitoring competitive COTS candidates by collecting and
evaluating information from the COTS marketplace.
Based on the information gathered from the 8 first-year projects using the methods, the analysis found that the teams applying the CBD process decision framework performed better than those that did not. More specifically, the statistical results show a factor of 1.28 improvement in team performance and a factor of 1.6 increase in client satisfaction. However, a number of decision points also occurred where the framework and its guidelines were not sufficient to support the developers' decision-making. This is mainly because most developers are computer science graduate students who are skillful in programming but have little or no experience in project management, especially risk management. Moreover, the framework is able to handle changes and provide guidance on what activity sequences the developers should follow in order to mitigate their risks, but nothing in the framework addressed how, or how much, one can do this.
5.2 Risk based prioritization Strategy
To address this problem, the COCOTS Risk Analyzer is introduced to support risk-based prioritization as a fundamental strategy for structuring many decision procedures within our framework, prioritizing both product and process alternatives that compete for limited resources. Table 5.1 summarizes how the risk strategy steps, spiral quadrants, and CBD process framework steps relate to each other:
Table 5.1 Steps of Risk-Based Prioritization Strategy

Risk Strategy Step | Spiral Quadrants | CBD Process Decision Framework Step | Description
S1 | Q1      | P1, P2   | Identify OC&Ps, COTS/other alternatives
S2 | Q2a     | P3       | Evaluate COTS vs. OC&Ps (incl. COCOTS)
S3 | Q2a     | P3       | Identify risks, incl. COCOTS risk analysis
S4 | Q2b     | P3       | Assess risks, resolution alternatives; if risks manageable, go to S7
S5 | Q2b, Q1 | P6       | Negotiate OC&P adjustments; if none acceptable, drop COTS options (P7)
S6 | Q2a     | P3       | If OC&P adjustments successful, go to S7; if not, go to S5
S7 | Q3      | P4 or P5 | Execute acceptable solution
5.3 Optimizing process decisions
Next, we use four example scenarios to illustrate how to use this risk-based prioritization strategy to support decision processes.
5.3.1 Establishing COTS evaluation criteria

As introduced in Chapter 3, the COTS assessment process element can be further decomposed into steps A1-A6, as elaborated in Figure 3.7. Establishing evaluation criteria is a major task included in step A1, where inexperienced CBD developers often report difficulties.
Selection of COTS products is typically based on an assessment of each product in light of certain product attributes used as evaluation criteria. Inappropriate COTS selection can cause much late rework or even project failure; therefore, it is very important to have an essential set of product attributes in the first place.
To do this, a project should follow the risk-based prioritization strategy, starting by identifying an initial broad set of relevant attributes such as functionality, security, and cost. (A comprehensive list of COTS product attributes is defined in COCOTS and can be used as a good starting point.)
In this case, the major risks reflect the risk of not including certain product attributes in the evaluation criteria, resulting in inappropriate COTS choices. To assess the exposure of such risks, the COCOTS inputs include two types of voting (Step S2 in Table 5.1). For each attribute, the developers vote on its ease of evaluation, while the client votes on its importance to the organization. The voted scores are on a 0-10 normalized scale, representing an increasing degree of ease or importance. The risk rank value for each attribute is then quantified according to the following equation (Step S3 in Table 5.1):

Risk rank = Degree of ease of evaluation * Degree of importance to organization
Therefore, attributes with higher risk rank values are those that are relatively important to the organization and relatively easy to evaluate. In general, as a risk mitigation strategy, higher-ranked attributes should have higher priority for inclusion in the evaluation criteria list. One thing to note is that when conflicts exist among the votes (e.g., two extreme scores for the same attribute from different developers), the conflicts should be resolved through further discussion before proceeding to finalize the evaluation criteria set.
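The voting and ranking scheme above can be sketched as follows; the attribute names and vote values are hypothetical:

```python
# Sketch of the risk-rank computation: developers vote ease of evaluation,
# the client votes importance, both on a 0-10 scale, and attributes are
# prioritized by Risk rank = ease * importance. Votes below are made up.

def risk_rank(ease_votes, importance_votes):
    """Average each side's votes, then multiply them."""
    ease = sum(ease_votes) / len(ease_votes)
    importance = sum(importance_votes) / len(importance_votes)
    return ease * importance

attributes = {
    "functionality": risk_rank([8, 9, 7], [10, 9]),
    "security":      risk_rank([4, 5, 3], [9, 10]),
    "cost":          risk_rank([9, 8, 10], [6, 5]),
}

# Higher-ranked attributes are included in the evaluation criteria first.
for name, rank in sorted(attributes.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {rank:.1f}")
```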
5.3.2 Scoping COTS assessment process: An example

Project teams often scope the COTS assessment process based on the COTS products' likely cost. A better scoping criterion is to apply our risk-based prioritization strategy to calculate the amount of risk exposure involved in choosing the wrong COTS combination. For example, a supply chain application started with an $80,000 effort on COTS initial filtering, using separate weighted-sum evaluations of the best-of-breed enterprise resource planning (ERP), advanced planning scheduling (APS), transaction management (TM), and customer relationship management (CRM) COTS package candidates, based on documentation reviews, demos, and reference checking. The team quickly identified the best-of-breed COTS choices that seemed to be the strongest. However, there was a chance that the best-of-breed COTS combination could have many technical and business-model-assumption incompatibilities, as indicated by the COCOTS risk analysis results. For example, among the best-of-breed COTS combination, the COCOTS run (Steps S2 and S3 in Table 5.1) for the COTS AB combination had an APCPX (interface complexity) rating of Very High and an ACSEW (supplier extension willingness) rating of Very Low, indicating a significant integration risk.
In this case, the risk-based prioritization strategy suggests a better risk mitigation approach: use separate analyses (e.g., prototyping) to further assess the leading choices and identify the major interoperability risk exposures, then use the size of the overall risk exposure to scope a more detailed COTS interoperability assessment. The analysis showed that the incompatibilities would lead to a $3 million, eight-month overrun and associated system effectiveness shortfalls. Had the team invested more than $80,000 in COTS assessment to include interoperability prototyping, they would have been able to switch to a more compatible COTS combination with only minor losses in effectiveness and without expensive late rework. The CBD process decision framework and its assessment process element accommodate this through two types of assessment tasks: initial filtering (A2) and detailed assessment (A3-A5).
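The scoping heuristic above can be sketched as a simple risk-exposure comparison; the probability and loss figures below are illustrative assumptions loosely based on the supply chain example:

```python
# Sketch: scope COTS assessment effort by risk exposure
# RE = P(wrong COTS combination) * size(loss), rather than by COTS cost alone.
# The 25% probability is a made-up figure; the $3M loss and $80K initial
# assessment echo the supply chain example.

def risk_exposure(p_wrong_combo, loss):
    return p_wrong_combo * loss

re = risk_exposure(0.25, 3_000_000)   # assumed 25% chance of a $3M overrun
initial_assessment = 80_000

# Invest in deeper interoperability assessment (e.g., prototyping) while
# its cost stays well below the exposure it can buy down:
if re > initial_assessment:
    print(f"Risk exposure ${re:,.0f} justifies assessment beyond ${initial_assessment:,}")
```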
5.3.3 Sequencing COTS integration activities

Patterns exist between COTS activity sequences and their indicated risks [Yang 05]. Appropriately sequencing COTS activities according to a pattern can be a valid means of mitigating the corresponding risk. However, this requires an overall evaluation with respect to the particular project situation. As a handy tool for calculating the associated COTS integration risk, the COCOTS Risk Analyzer and risk-based prioritization are very useful in facilitating such evaluation.
In general, there are five types of risk mitigation strategies: buying information, risk avoidance, risk transfer, risk reduction, and risk acceptance [Boehm 91]. In the supply chain application example above, the developers were actually using detailed COTS assessment to buy information in order to select a more compatible COTS choice. Considering that stakeholders may have different value propositions, Figure 5.1 illustrates the different risk mitigation strategies they can follow in flexibly composing their COTS activity sequences, with the following stakeholder propositions in each situation:

Risk avoidance: The team could use an adequate alternative COTS C and follow sequence (a) to avoid the risk of COTS A and B not talking to each other.

Risk transfer: If the customer insists on using COTS B, the developers can have them establish a risk reserve, to be used to the extent that A and B cannot talk to each other, and follow sequence (b).

Risk reduction: If the customer decides to build wrappers to get A and B to talk through CORBA connections, the development cost will increase but the schedule delay will be minimized.

Risk acceptance: If the developers prefer to solve the A and B interoperability problem themselves, they will have a big competitive edge on future procurements: "Let's do this on our own money, and patent the solution."
To illustrate these in our risk strategy: suppose that in Steps S2 and S3, the COCOTS run shows the COTS AC combination has an ACPER (performance adequacy) rating of Low and an ACSEW rating of Very Low, also indicating a fairly significant risk. In Step S4, the developers prototype the AB interface complexity state of nature and find that it incurs significant added cost and schedule. Step S5 involves the developers and the customer evaluating the 4 risk resolution alternatives and determining an adjustment of the OC&Ps that leads to an acceptable COTS solution in Step S6.
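The driver-rating combinations above suggest a simple rule form for flagging risky COTS combinations. The sketch below assumes a hand-written rule table; it is not the actual COCOTS Risk Analyzer rule base:

```python
# Sketch (assumed rule form): flag a COTS combination as risky when two
# cost-driver ratings jointly fall at hazardous ends of their scales, as
# in the APCPX/ACSEW and ACPER/ACSEW examples in the text.

RISK_RULES = [
    # (driver_a, bad_ratings_a, driver_b, bad_ratings_b, risk description)
    ("APCPX", {"High", "Very High"}, "ACSEW", {"Low", "Very Low"},
     "integration risk: complex interfaces, unwilling supplier"),
    ("ACPER", {"Low", "Very Low"}, "ACSEW", {"Low", "Very Low"},
     "performance risk: inadequate performance, unwilling supplier"),
]

def flag_risks(ratings):
    """Return risk descriptions triggered by a dict of driver ratings."""
    return [risk for a, bad_a, b, bad_b, risk in RISK_RULES
            if ratings.get(a) in bad_a and ratings.get(b) in bad_b]

ab = {"APCPX": "Very High", "ACSEW": "Very Low"}  # COTS AB combination
ac = {"ACPER": "Low", "ACSEW": "Very Low"}        # COTS AC combination
print(flag_risks(ab))
print(flag_risks(ac))
```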
[Figure content: four alternative activity sequences —
(a) Risk avoidance (COTS C adequate): Choose COTS C; integrate COTS A, C; develop application; deliver.
(b) Risk transfer (COTS C not adequate): Choose COTS B; develop application, integrate A & B; if OK, develop application and deliver; if problem, use risk reserve to fix it.
(c) Risk reduction (custom $, IP): Choose COTS B; develop parts of application, use wrappers to integrate A and B; develop rest of application; deliver.
(d) Risk reduction (custom $, IP): as (c), packaging the wrappers for future use.]
Figure 5.1 Different risk patterns result in different activity sequences
5.3.4 Prioritization of top features: Another example

Stakeholder win-win negotiation plays an important role in converging on a mutually satisfactory set of product features to be implemented within a 24-week schedule. In this case, the COCOTS Risk Analyzer provides risk analysis that can be used in risk-based prioritization of the project requirements, converging on a feasible feature set.
This is illustrated by an example USC e-services project, "Open source discussion board and research platform". During the win-win negotiation, stakeholders agreed that the full operational capability includes three top-level features: discussion board, instant messenger, and other user management features, supporting the Internet Explorer, Mozilla, and Netscape Navigator web browsers. However, the development team included only 6 people and was under a strict schedule limit of 24 weeks. Using the COCOTS Risk Analyzer, the project followed the risk-based prioritization strategy and found that including support for the Mozilla and Netscape Navigator web browsers would cause a 6-week schedule overrun, and including the instant messenger feature would cause a 4-week schedule delay.
Therefore, the stakeholders agreed to defer support for the other web browsers and the instant messenger feature to the evolution requirements, to be implemented in a later increment but still used in determining the system architecture in order to facilitate evolution to full operational capability.
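The deferral decision above can be sketched as a greedy selection against the schedule limit. The 24-week limit and the 6- and 4-week impacts come from the example; the base schedule estimate and the priority ordering are assumptions:

```python
# Sketch: defer features whose estimated schedule impact would push the
# project past its fixed deadline. Base schedule and priorities are made up.

LIMIT_WEEKS = 24
base_weeks = 22  # assumed estimate for the core discussion-board capability

features = [  # (name, added weeks, priority: lower = more important)
    ("user management", 2, 1),
    ("instant messenger", 4, 2),
    ("Mozilla/Netscape support", 6, 3),
]

schedule, included, deferred = base_weeks, [], []
for name, weeks, _ in sorted(features, key=lambda f: f[2]):
    if schedule + weeks <= LIMIT_WEEKS:
        schedule += weeks
        included.append(name)
    else:
        deferred.append(name)  # becomes an evolution requirement

print("included:", included)
print("deferred to later increments:", deferred)
```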
5.4 Discussion
In 2005-2006, the risk-based prioritization principle was experimentally applied within the context of the CBD framework on 5 projects, and an end-of-project survey was given to all developers to collect usage data and feedback. Out of the 24 responses, 19 commented that the framework is useful in preparing the life cycle plan, and 21 reported that the risk-based prioritization principle helped in their risk analysis. These are encouraging indicators in evaluating the performance of the improved framework.
Chapter 6 Validation Results
The evolving CBA Process Decision Framework has been taught and experimentally applied in USC e-services CBA projects since 2004. The experimental results indicate that its appropriate use has resulted in more successful e-services CBA projects. This section presents the primary empirical results from Fall 2004 and Fall 2005.
6.1 Results from Fall 2004
In the Fall 2004 semester, 14 of the 17 USC e-services projects were of the CBA type of development according to the CBA definition presented in Section 3.1. An experiment was performed on the 14 CBA projects. The experimental treatment involved giving all of the projects an approximately 2-hour tutorial on the definition and application of the CBA composable processes -- the CBA PDF and its three CPEs -- and then letting each project decide whether or not to use the composable processes. This treatment involves a potential threat to validity, in that more issue-aware teams may choose to adopt the processes. But it avoids the potentially larger threats to validity involved in randomly assigning the processes to teams that then choose not to apply them appropriately.
The first group, A, consists of the 8 projects that applied the composable processes; the second group, B, includes the 6 projects that did not. This section discusses two categories of quantitative results obtained from the Fall 2004 experiment.
6.1.1 Team Performance

To measure and compare team performance, data on defects within team deliverables were collected and analyzed. The defect measurement is derived from the overall number of defects within the LCO and LCA packages. Table 6.1 shows the comparison of defects between group A and group B.
Table 6.1 Comparison of 2004 groups A and B on number of defects

Group A                    Group B
Team No. | Defects         Team No. | Defects
1        | 41              2        | 57.8
6        | 31              3        | 76
7        | 48              4        | 118
8        | 37              10       | 85
14       | 33              11       | 72
18       | 49              12       | 86.7
21       | 50
24       | 30
Average = 39.8             Average = 82.6
A paired t-test was performed on the two groups' performance data, and the result shows a very significant difference between the two groups: the two-tailed P value equals 0.0029. By conventional criteria, this difference is considered very statistically significant.
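The group comparison above can be reproduced from the Table 6.1 data. Since the two groups have different sizes, the sketch below uses Welch's independent-samples t statistic rather than a paired test; it is an illustrative check that the difference is large, not the dissertation's original analysis:

```python
# Sketch: Welch's two-sample t statistic on the Table 6.1 defect counts.

from statistics import mean, variance

group_a = [41, 31, 48, 37, 33, 49, 50, 30]   # defects, Table 6.1 group A
group_b = [57.8, 76, 118, 85, 72, 86.7]      # defects, Table 6.1 group B

def welch_t(a, b):
    """t = (mean_b - mean_a) / sqrt(var_a/n_a + var_b/n_b)."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(b) - mean(a)) / (va + vb) ** 0.5

t = welch_t(group_a, group_b)
print(f"mean A = {mean(group_a):.1f}, mean B = {mean(group_b):.1f}, t = {t:.2f}")
```

A t statistic near 5 is consistent with the very small P value the text reports.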
6.1.2 Client Satisfaction

A client satisfaction survey was performed at the end of the projects. It consists of questions measuring five team aspects: committed, representative, authorized, collaborative, and domain knowledge. The total client satisfaction score is out of 20 points. As indicated by the numbers in Table 6.2, the average client satisfaction scores did not seem to improve much, and the paired t-test results also suggest no significant difference.

However, this could be explained by a number of factors that call for further refinement of the data collection template and procedure, including both more COTS-specific client evaluation questions and more controlled data collection. Many of the current client evaluation questions are independent of COTS considerations, such as team responsiveness to clients and client learning. Note that data from three teams is missing in Table 6.2.
Table 6.2 Comparison of 2004 groups A and B on client satisfaction

Group A                              Group B
Team No. | Client Satisfaction       Team No. | Client Satisfaction
1        | 15.5                      2        | 20
6        | 19.75                     3        | 20
7        | 20                        4        | 19.5
8        | 19                        10       | 13.7
14       | Not answered              11       | 15
18       | Not answered              12       | Not answered
21       | 18.25
24       | 19
Average = 18.6                       Average = 17.6
6.1.3 Other Results

A survey was performed on these projects to collect usage data from the CBA developers. The survey results show that different COTS impact profiles exist between the group A and group B projects. Figure 6.1 compares the different aspects of COTS impact in terms of their magnitude in the two groups. On average, the group A projects exhibited less schedule delay, fewer difficult integration issues, and slightly fewer changes in requirements and architecture design; they also reported that the use of COTS helped to save development effort and produce a higher quality system. Note that the number of responses from group A is 44, and from group B is 26. A copy of the survey is attached to this thesis.
[Bar chart comparing groups A and B on the magnitude of COTS impacts across six aspects: schedule delay, hard to be integrated, architecture changes, requirement changes, save dev. effort, higher quality]
Figure 6.1 Comparison of COTS impacts in the two groups

Among the responses from group A, the developers also identified various advantages of applying the framework. These are summarized in Table 6.3, where the last column shows the percentage of responses for each listed advantage.
Table 6.3 Reported Benefits from the Usage of the Framework

Framework can help with:             Percentage
COTS assessment                      81.8%
Risk identification and mitigation   68.2%
Life cycle planning                  63.6%
6.2 Results from Fall 2005
As described in Chapter 5, the COCOTS Risk Analyzer was developed and applied in Fall 2005 in conjunction with the CBD Process Decision Framework to facilitate risk-based decision making. In Fall 2005, 11 of the 20 USC e-services projects (55%) were of the CBA type of development according to the CBA definition presented in Section 3.1. The list of projects is summarized in Table 6.4.
Table 6.4 USC e-services projects in Fall 2005

Project ID | Project Name
1 Open Source Discussion Board and Research Platform - I
2 Physics Education Research (PER)
3 Pasadena High School Computer Network Study
4 Virtual Football Trainer System
5 Data Mining PubMed Results
6 USC Football Recruiting Database
7 USC Equipment Inventory Database
8 Data Mining of Digital Library Usage Data
9 J2EE-based Session Management Framework Extension
10 Code Generator – Template based
11 dotProject and Mantis integration
12 Mule as a highly scalable distributed framework for data migration
13 Develop a Web Based XML Editing Tool
14 CodeCount™ Product Line with XML and C++ - I
15 COSYSMO Extension to COINCOMO
16 Intelligent, 'diff'ing CodeCount Superstructure
19 EBay Notification System
21 Rule-based Editor
22 Open Source Discussion Board and Research Platform - II
23 CodeCount™ Product Line with XML and C++- II
In Fall 2005, the experiment was designed and carried out as shown in Figure 6.2. Five process drivers were used to help the development teams make their process decisions with respect to changing project situations. These include: the Composable Process Framework, the COCOTS Risk Analyzer, cost estimation models (i.e., COCOMO II and COCOTS), key stakeholders' win-win negotiation, and COTS market watch. Each of these process drivers plays a different role in helping developers generate appropriate process instances from the framework:
Figure 6.2 Fall 2005 Experiment Setting
The Composable CBA Process Decision Framework is used as a
comprehensive baseline for generating a particular COTS process.
[Figure 6.2 content: teams make optional use of the Composable CBA Process Framework process elements and the COCOTS Risk Analyzer (which returns a risk level); COCOMO II and COCOTS take cost factors and return cost exposure; the COTS market/vendors supply information, updates, and changes; win conditions and system OC&P agreements are negotiated; clients, graders, and IV&Vers evaluate the product and artifacts, yielding defects and client satisfaction scores. Timeline: (1) beginning of semester -- COTS lecture, COTS experience survey; (2) during the semester -- weekly individual effort reporting, weekly team progress reports; (3) end of semester -- COCOTS report, feedback survey.]
The CBA Experience Base is a knowledge base of guidelines, patterns, and
models that is empirically formulated and used to support the decision-
making activity during CBA development.
The Constructive COTS cost model (COCOTS) [10] is primarily used to estimate the COTS-associated effort, and the COCOMO II model is used to estimate the portion of custom development effort. The estimation results are used during cost/benefit analysis for choosing the COTS options that produce the best expected life cycle cost-benefit.
Different stakeholders have different expectations and priorities. Explicitly recognizing them and involving them in win-win negotiations helps ensure that all relevant areas are identified and addressed.
COTS market and COTS vendor are two important variation sources that
introduce the most change factors. Therefore, it is critical for the
developers to keep monitoring competitive COTS candidates by collecting
and evaluating information from COTS market/vendor.
Finally, the clients, course graders, and IV&V personnel provide the evaluation data used to compare the two groups.
The first group, A, consists of the 5 projects that chose to apply the Composable processes; the second group, B, includes the 6 projects that did not. Both groups were given a 2-hour lecture on their selected process. A COTS experience survey of all team members showed that the developers in both groups were at similar CBD experience levels. All projects were also scoped at a similar complexity level, so that they were comparable to one another.
The developers were asked to submit individual effort and team progress reports on a weekly basis. The clients, class graders, and IV&V personnel evaluated team artifacts at each of the LCO, LCA, and IOC milestones.
This section discusses results from the Fall 2005 experiment.
6.2.1 Pre-experiment survey results

At the beginning of the Fall 2005 semester, an experience survey was given to individuals to collect their personal experience with COTS-based development. 89 responses were received. The results of the survey indicate that:

1. About 44% of the developers have been in software development for longer than 1 year; another 35% have no previous experience or less than 6 months;

2. About 1/3 of the developers have experience in COTS-based development;

3. Only 1/5 of the developers have knowledge of any COTS-based development process;

4. The developers' familiarity with different COTS-related activities is illustrated in Figure 6.3.
Figure 6.3 Experience with COTS activities
6.2.2 Experiment results

Project COTS/risk profile

Table 6.5 compares the average project profile information of the two groups in terms of number of COTS products, number of capability requirements, number of requirements covered by COTS, and number of reported COTS risks. Also shown in Table 6.5 is a third group of non-COTS-intensive projects.
Table 6.5 Comparison of 2005 group A and B project profiles

Type | # of COTS | # of reqt's | # of reqt's by COTS | # of reported COTS risks | Total Effort (Hrs) | Effort on Risk Mgmt. (Hrs)
Group A (teams applied COTS process framework) | 4.8 | 7 | 5 | 49 | 1852 | 73
Group B (teams applied MBASE) | 2.5 | 11 | 6 | 27 | 1666 | 40
Group C (non-CBD projects) | 1 | 8 | 2 | 15 | 1314 | 36
The data in Table 6.5 shows that the group A projects had more COTS products to deal with. Their comparatively fewer requirements indicate that the requirements were counted at a higher granularity level than in the other groups, which had more custom components. The fraction of COTS-provided requirements is highest in group A, which is consistent with our CBA definition in [Boehm 2003]. The 49 reported COTS risks in group A also indicate that, by applying our CBD composable processes and the COCOTS Risk Analyzer, the teams were able to identify more COTS-oriented risks. The data also shows that the non-COTS-intensive projects have considerably fewer COTS-delivered requirements and COTS risks.
Table 6.6 Evaluation of COCOTS Risk Analyzer

Team Number | # of Predicted Risks | # of Reported COTS Risks | # of Same Risks | Notes
2  | 2  | 1 | 1 |
3  | 12 | 6 | 5 | COTS cost risk
4  | 15 | 7 | 7 |
8  | 1  | 2 | 1 | Tailoring risk
12 | 1  | 1 | 1 |
Table 6.6 shows more detailed risk data from the group A projects. It clearly shows that the COCOTS Risk Analyzer was able to help Teams 2, 4, and 12 identify COTS risks that matched their project realities. For Teams 3 and 8, the COCOTS Risk Analyzer failed to predict some risks related to COTS ownership cost and tailoring issues. This is because the current COCOTS Risk Analyzer is mainly focused on integration risks based on the COCOTS glue-code sub-model; extending it to cover the COCOTS Assessment and Tailoring sub-models is included as one of the next steps in our study.
Team performance

The defect measurement is derived from the overall number of defects within the LCO and LCA packages. Table 6.7 shows the comparison of total defects and COTS-related defects between group A and group B.
Table 6.7 Comparison of 2005 groups A and B on number of defects

Group A                                     Group B
Team No. | # Defects | # COTS Defects      Team No. | # Defects | # COTS Defects
2        | 80        | 20                  1        | 108       | 54
3        | 79        | 37                  7        | 146       | 75
4        | 98        | 25                  10       | 105       | 82
8        | 85        | 23                  11       | 99        | 54
12       | 105       | 13                  16       | 132       | 63
                                           23       | 91        | 60
Average  | 89        | 24                  Average  | 114       | 65

A paired t-test was performed on the two groups' performance data, and the result shows that a significant difference exists between the two groups at the 95% confidence level.
Client satisfaction

Table 6.8 Comparison of 2005 groups A and B on client satisfaction

Group A                              Group B
Team No. | Client Satisfaction       Team No. | Client Satisfaction
2        | 17                        1        | 10
3        | 20                        7        | 17
4        | 20                        10       | 20
8        | 20                        11       | 20
12       | 17                        16       | 17
                                     23       | 17
Average = 18.8                       Average = 16.8
                                     Average without team 1 = 18.2
As indicated by the numbers in Table 6.8, the average client satisfaction measure, as in Fall 2004, did not seem to improve much. The paired t-test results indicate a significant difference, but the difference is mostly due to the potentially outlying project of team 1.
6.2.3 Post-experiment survey results

An end-of-semester usage survey was also carried out on group A to gather feedback on using the Composable processes. Figure 6.4 shows the distribution of COTS activities across all 5 teams.
Figure 6.4 COTS activities performed in 2005
Figure 6.5 Activities improved in 2005
Figure 6.5 shows the responses regarding improvement in performing those activities by following the Composable process guidelines. Note that the number of responses is 24. The data shows that users of the Composable Processes generally believe that they can strongly improve the practice of life cycle planning, risk management, and COTS assessment. However, it also indicates that the Composable Processes may not provide equally effective support for the glue code development activity.
6.3 Threats to Validity
The empirical validation was limited by the availability and quality of empirical data, especially with respect to distinguishing COTS-related effort from general project effort. Although the CBA projects for the case studies were taken from a graduate-level university class, not from industry, the projects produced real, usable deliverables for real clients, and were staffed by teams in which almost half the members had over 1 year of industry experience and about 2/3 had at least 6 months.
As discussed above, the primary threat to validity was allowing teams to self-select use of the CBA framework and processes. This somewhat confounds the effects of the CBA framework with the effects of team interest in adopting new processes. But the alternative of randomly assigning teams to use the CBA framework involves confounding effects due to assigned teams not applying the framework appropriately.
Chapter 7 Contribution and Future Work
In this chapter we summarize the main contributions of our Composable Processes work and present some insights on future areas of study.
7.1 Contributions
The following list summarizes the main contributions of this thesis:
1. This thesis provided a 6-year longitudinal analysis of small e-services
applications, showing that the fraction of CBA projects increased from 28% in
1997 to 70% in 2002.
2. This thesis showed that although there is no one-size-fits-all process, there are
some common activity patterns and significant correlations (e.g., a -0.92 negative
correlation between the amount of Tailoring effort and Glue code effort).
3. This thesis provided Composable processes with a CBD Process Decision
Framework as a process model generator, enabling projects to generate
appropriate combinations of A, T, G, and C process elements that best fit their
project situation and dynamics.
4. This thesis presented a heuristic risk assessment approach and tool
implementation, the COCOTS Risk Analyzer, enabling more effective use of
COCOTS to not only estimate CBA development cost and effort, but also
improve risk management practice by partly automating risk analysis and
providing mitigation advice.
5. This thesis experimented with the use of the CBA framework and risk-based
process prioritization with the support of the COCOTS Risk Analyzer.
Although the data and phenomena cannot clearly separate out COTS effects, the
results indicated improvements in the practice of life cycle planning, risk
management, and COTS assessment.
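The correlation analysis cited in contribution 2 amounts to computing a Pearson coefficient over per-project effort figures. The sketch below is only illustrative: the effort values are a hypothetical subset loosely patterned on the Appendix A data, not the thesis's exact data set, so the resulting coefficient is not the reported -0.92.

```python
from statistics import fmean
from math import sqrt

def pearson(xs, ys):
    # Pearson product-moment correlation coefficient.
    mx, my = fmean(xs), fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-project effort hours (Tailoring vs. glue code),
# loosely patterned on Table A.39; not the thesis's actual data set.
tailoring = [225.5, 407.0, 54.0, 29.0, 7.0, 3.0, 2.0]
glue_code = [0.0, 0.0, 1.0, 26.0, 23.0, 22.0, 103.0]

r = pearson(tailoring, glue_code)
print(f"Pearson r = {r:.2f}")  # negative for this anti-correlated sample
```

Projects that invest heavily in tailoring tend to show little glue-code effort and vice versa, which is the pattern behind the negative correlation the thesis reports.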
7.2 Areas of Future Work
Our work has focused on helping CBA developers generate flexible process
instances for their particular project situations, providing a value-based set of
COTS-specific process elements and a risk-based prioritization decision strategy.
Several attractive directions for future research include:
Extending the COCOTS Risk Analyzer to cover the other COTS-related
activities, such as COTS assessment and tailoring, to provide more effective
risk analysis and mitigation advice for CBD;
Representing the concurrency and backtracking aspects of CBA
development to refine the composable process framework;
Assessing the validity of the process framework in other CBA sectors
outside the USC scope and refining the Composable Processes definition
and guidelines;
Identifying common process element configurations, valid and invalid
configurations, or large-grain CBA process patterns to better support
process decision-making.
References
[Abd-Allah 96] Abd-Allah, A.: Ph.D. Dissertation, “Composing Heterogeneous Software Architectures”, USC, 1996. URL: http://sunset.usc.edu/publications/dissertations/aaadef.pdf.
[Abts 01] C. Abts, “Extending the COCOMO II Software Cost Model to Estimate Effort and Schedule for Software Systems Using Commercial-Off-The-Shelf (COTS) Software Components: the COCOTS Model,” Ph.D. Dissertation, Oct. 2001.
[Albert 00] Albert, C. and E. Morris, "Commercial Item Acquisition: Considerations and Lessons Learned", 2000. http://www.dsp.dla.mil/documents/cotsreport.pdf
[Albert 02] Albert, C., and Brownsword, L.: “Evolutionary Process for Integrating COTS-Based Systems (EPIC): An Overview,” Technical Report, CMU-SEI-2002-TR-009, July 2002.
[Auerbach 80] Auerbach Computer Technology Reports, Auerbach Publishers, Inc., 6560 No. Park Dr., Pennsauken, NJ 08109, periodical.
[Basili 91] Basili, V., Rombach, H., “Support for comprehensive reuse”, Software Engineering Journal, September 1991, pp. 303-316.
[Basili 96] Basili, V.: “The Role of Experimentation in Software Engineering: Past, Current, and Future”, Proceedings of the 18th International Conference on Software Engineering, March 25-29, 1996, pp 442-449.
[Basili 01] Basili, V., Boehm, B.: “COTS Based System Top 10 List,” Computer, Berlin, Germany, May 2001, pp 91-93.
[Brownsword 00] Brownsword, L.; Oberndorf, T.; Sledge, C. A. Developing New Processes for COTS-Based Systems. IEEE Software, July/August 2000.
[Balk 00] Balk, L.D., Kedia, A.: “PPT: a COTS integration case study”, Proceedings of the 2000 International Conference on Software Engineering, June 2000 pp.42 – 49.
[Balzer 02] Balzer, R., “Living with COTS,” Proceedings, ICSE 24, May 2002, pp. 5.
[Benguria 02] G. Benguria, A. Garcia, D. Sellier, and S. Tay, “European COTS Working Group: Analysis of the Common Problems and Current Practices of the European COTS Users,” COTS-Based Software Systems (Proceedings, ICCBSS 2002), Springer Verlag, 2002, J. Dean and A. Gravel (eds.), pp. 44-53.
[Bhuta 05] Bhuta, J., Boehm, B.: “A Method for Compatible COTS Component Selection”. Proceedings of COTS-Based Software Systems, 4th International Conference, ICCBSS 2005, pp.132-143. Bilbao, Spain, February 7-11, 2005, Springer 2005.
[Boehm 81] Boehm, B.: Software Engineering Economics, Prentice-Hall, Inc., Englewood Cliffs, NJ 07632, 1981.
[Boehm 88] Boehm, B.: “A Spiral Model of Software Development and Enhancement,” Computer, May 1988, pp. 61-72.
[Boehm 96] Boehm, B.: “Anchoring the Software Process,” Software, July 1996, pp. 73-82.
[Boehm 97] Boehm, B., and Wolf, S. “An Open Architecture for Software Process Asset Reuse”. Proceedings of the 10th International Software Process Workshop. June, 1996. Ventron, France, pp. 54-56.
[Boehm 99] Boehm, B., and Abts, C.: “COTS Integration: Plug and Pray?”, Software, January 1999, pp. 135-138.
[Boehm 99b] Boehm, B., Port, D., Abi-Antoun, M., and Egyed, A., "Guidelines for the Life Cycle Objectives (LCO) and the Life Cycle Architecture (LCA) deliverables for Model-Based Architecting and Software Engineering (MBASE)," USC Technical Report USC-CSE-98-519, University of Southern California, Los Angeles, CA, 90089, February 1999.
[Boehm 00] Barry Boehm, Chris Abts, A.Winsor Brown, Sunita Chulani, Bradford K. Clark, Ellis Horowitz, Ray Madachy, Donald Reifer and Bert Steece. Software Cost Estimation with COCOMO II. Prentice Hall PTR, July 2000.
[Boehm 01] Boehm, B., D. Port, M. Abi-Antoun, and A. Egyed, eds. Guidelines for Model-Based Architecting and Software Engineering (MBASE). Ver. 2.2. USC-CSE, Feb. 2001, http://sunset.usc.edu/Research/MBASE.
[Boehm 02] B. Boehm, D. Port, A. Jain, and V. Basili, "Achieving CMMI Level 5 Improvements with MBASE and the CeBASE Method," Crosstalk, vol. May, no. pp. 9-16, 2002.
[Boehm 03] B. Boehm, D. Port, Y. Yang, and J. Bhuta, “Composable Process Elements for Developing COTS-Based Applications,” Proceedings, ISESE 2003, Sep. 2003.
[Boehm 03b] Boehm, B.: “Value-Based Software Engineering”, ACM Software Engineering Notes, March 2003.
[Boehm 03c] Boehm, B., Port, D., Yang, Y., Bhuta, J. : “Not All CBS Are Created Equally: COTS-Intensive Project Types,” Proceedings of COTS-Based Software Systems, 2nd International Conference, ICCBSS 2003. Ottawa, Canada, Feb. 2003, pp. 36-50. Springer 2003.
[Boland 97] Boland, D., Coon, R., Byers, K., Levitt, D., "Calibration of a COTS Integration Model Using Local Project Data", The Twenty-second Software Engineering Workshop, NASA/Goddard Space Flight Center Software Engineering Laboratory (SEL), Greenbelt, MD, December 1997, pp. 81-98.
[Braun 99] Christine L. Braun, A Life Cycle Process for Effective Reuse of Commercial Off-the-Shelf (COTS) Software, Proceedings of the fifth symposium on Software reusability 1999, Los Angeles, California, United States.
[Brooks 86] Brooks, F., "No Silver Bullet." In Proc. 10th IFIP World Computing Conference, pp. 1069-1076, 1986.
[Brooks 04] Brooks, L., “Costing COTS Integration”, Proceedings of the 3rd International Conference on COTS-Based Software Systems (ICCBSS 2004), February 2004, Redondo Beach, CA, USA. Springer 2004.
[Brownsword 98] Brownsword, L., Carney, D., Oberndorf, T., "The Opportunities and Complexities of Applying Commercial-Off-the-Shelf Components", CrossTalk, April 1998.
[Carney 97] David Carney. Assembling large systems from COTS components: Opportunities, cautions, and complexities. In SEI Monographs on the Use of Commercial Software in Government Systems. Software Engineering Institute, Carnegie Mellon University, June 1997.
[Carney 98] Carney, D. and Oberndorf, P. Commandments of COTS: Still in Search of the Promised Land. Technical Report, Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA. (1998).
[Carney 03] Carney, D., Morris, E., and Place, P.: Identifying Commercial Off-the-Shelf (COTS) Product Risks: The COTS Usage Risk Evaluation. September 2003. TECHNICAL REPORT. CMU/SEI-2003-TR-023.
[Carr 04] D. Carr, "Keynote Address: Integration Successes and Flops: The Baseline Magazine Perspective," Proceedings of the 3rd International Conference on COTS-Based Software Systems (ICCBSS 2004), February 2004, Redondo Beach, CA, USA. Springer 2004.
[Chaos 00] Standish Group, “Extreme Chaos,” http://www.standishgroup.com, 2001.
[Chrissis 03] M.B. Chrissis, M. Konrad, S. Shrum, CMMI: Guidelines for Process Integration and Product Improvement, Addison-Wesley, Feb. 2003.
[Chung 04] Chung, L., and Cooper, K. “Defining Goals in a COTS-Aware Requirements Engineering Approach”, Systems Engineering, vol. 7, No. 1, 2004.
[Datapro 80] Datapro Directory of Software, Datapro Research Corp., 1805 Underwood Blvd., Delran, NJ 08075, periodical.
[Dean 00] John Dean, Patricia Oberndorf, Mark Vigder, Chris Abts, Hakan Erdogmus, Neil Maiden, Neil Maiden, Michel Looney, George Heineman, Michel Guntersdorfer, “COTS Workshop: Continuing Collaborations for Successful COTS Development”, Proceedings of ICSE 2000.
[Dashofy 99] Dashofy, E., Medvidovic, N., Taylor, R.N., “Using Off-The-Shelf Middleware to Implement Connectors in Distributed Software Architectures”, Proceedings of the 21st International Conference on Software Engineering, Los Angeles, USA, 1999, pp. 3-12.
[Dong 99] Dong, J., Alencar, P.S.C., Cowan, D.D., “A Component Specification Template for COTS-based Software Development”, Proceedings of the First Workshop on Ensuring Successful COTS Development, Los Angeles, USA, 1999.
[Dorda 03] Comella-Dorda, S., Dean, J., Morris, E., and Oberndorf, P.: “A Process for COTS Software Product Evaluation,” Proceedings, ICCBSS 2003, Feb. 2002, pp. 86-96. Springer 2003.
[Ellis 95] Ellis, Tim (1995). COTS Integration in Software Solutions - A Cost Model, Proceedings, "System Engineering in Global Marketplace," NCOSE International Symposium, July 24-26, 1995, St. Louis, MO.
[Erdogmus 99] Erdogmus, H. (1999). Quantitative approaches for assessing the value of COTS-centric development. In Proceedings of the Sixth International Symposium on Software Metrics (Fort Lauderdale, FL, November 1999.
[Erdogmus 00] H. Erdogmus, C.A. Sledge, M. Looney and, S. Allison. Costing of COTS-Based Systems: An Initial Framework. Position Paper. Proc. ICSE ’2000 Workshop on Continuing Collaborations for Successful COTS Development, June 4-5, 2000, Limerick, Ireland.
[FAR 96] Federal Acquisition Regulations. Washington, DC: General Services Administration, 1996.
[Fox 97] Fox, G., Lantner, K., and Marcom, S.: “A Software Development Process for COTS-Based Information System Infrastructure”, The 22nd Software Engineering Workshop, NASA/Goddard Space Flight Center Software Engineering Laboratory (SEL), Greenbelt, MD, Dec. 1997, pp. 133-153.
[Fox 00] Fox, Steve, M. Moore, "EOSDIS Core System (ECS) COTS Lessons Learned", 25th Annual NASA Goddard Software Engineering Workshop, November 2000.
[Gacek 97] Gacek, C., “Detecting Architectural Mismatches During Systems Composition”, PhD Dissertation, USC, December 1998.
[Gentleman 97] Gentleman, W.M.: “Effective Use of COTS Software Components in Long Live Systems”, Proceedings, ICSE 1997, Boston, USA, 1997, pp. 635-636.
[IBM 77] IBM Corp., “Auditability and Productivity Information Catalog: System 370”, IBM, September 1977.
[ISO 99] ISO/IEC 14598-4: 1999, Software engineering -- Product evaluation -- Part 4: Process for acquirers.
[Jacobson etc. 99] Jacobson, I., Booch, G., and Rumbaugh, J., The Unified Software Development Process. Addison Wesley Longman, Reading, MA, 1999.
[Jeffery 99] Jeffery, D.R.; Votta, L.G.; “Empirical Software Engineering”, IEEE Transactions on Software Engineering, Volume 25, Issue 4, July-Aug. 1999. pp. 435 – 437.
[Kansala 97] Känsälä, K., “Integrating Risk Assessment with Cost Estimation”, IEEE Software, vol. 14, no. 3, pp. 61-67, May/June 1997.
[Kontio 96] Jyrki Kontio, A Case Study in Applying a Systematic Method for COTS Selection, Proceedings of the 18th international conference on Software engineering May 1996, Berlin, Germany.
[Kruchten 99] P. Kruchten, The Rational Unified Process, Addison Wesley, 1999.
[Kunda 99] Kunda, D., Brooks, L., Applying Social-Technical Approach for COTS Selection. Proceedings of the 4th UKAIS Conference, University of York, April 1999.
[Lauriere 04] S. Lauriere and J. Mielnik, "The eCots Director," ICCBSS 2004 Workshop on COTS Terminology and Categories, Redondo Beach, CA, USA, Feb. 9-12, 2004.
[Lawlis 01] Lawlis, P., Mark, K., Thomas, D., and Courtheyn, T.: “A Formal Process for Evaluating COTS Software Products”, Computer, 34(5), 2001, pp. 58-63.
[Lewis 00] P. Lewis, P. Hyle, M. Parrington, E. Clark, B. Boehm, A. Abts, R. Manners. Lessons Learned in Developing Commercial Off-The-Shelf (COTS) Intensive Software Systems, Federal Aviation Administration Software Engineering Resource Center October 2, 2000.
[Lewis 01] Lewis, Patrick, Hyle P., Parrington M., Clark E., Boehm B., Abts C., Manners R., Brackett J., "Lessons Learned in Developing Commercial Off-the-Shelf (COTS) Intensive Software Systems", FAA SERC Report, 2001.
[Madachy 97] Madachy R., “Heuristic Risk Assessment Using Cost Factors”, IEEE Software, May/June 1997.
[Maiden 99] Maiden, N. A. M, Ncube, C.,: “PORE: Procurement-Oriented Requirements Engineering Method for the Component-Based Systems Engineering Development Paradigm”, International Workshop on Component-Based Software Engineering, May 1999.
[Maxey 03] Burke Maxey: COTS Integration in Safety Critical Systems Using RTCA/DO-178B Guidelines. ICCBSS 2003: pp.134-142.
[Medvidovic 97] Medvidovic, N., Oreizy, P., Taylor, R.N., "Reuse of Off-the-Shelf Components in C2-Style Architectures", Proceedings of the 1997 Symposium on Software Reusability (SSR'97), Boston, USA, May 1997, pp. 190-198.
[Mehta 00] Mehta, N. R., Medvidovic, N., and Phadke, S.: “Towards a Taxonomy of Software Connectors.” Proceedings, ICSE 22, Jun. 2000, pp. 178-187.
[MITRE 03] Use of Free and Open-Source Software (FOSS) in the U.S. Department of Defense, MITRE report: MP 02 W0000101, Jan., 2003.
[Morera 04] D. Morera, "Tutorial: Using Commercial Components to Build Software Systems," ICCBSS 2004, Redondo Beach, CA, USA, Feb. 9-12, 2004.
[Morisio 00] M. Morisio, C. Seaman, A. Parra, V. Basili, S. Kraft, and S. Condon, “Investigating and Improving a COTS-Based Software Development Process,” Proceedings, ICSE 22, June 2000, pp. 32-41.
[Morisio 02] Morisio, M., and Torchiano, M., “Definition and Classification of COTS: a Proposal”, proceedings, ICCBSS 2002, Feb., 2002, FL, USA.
[NASA 80] “Computer Program Abstracts”, NASA-COSMIC (Univ. of Georgia), Athens, GA 30602, periodical.
[NTIS 80] National Technical Information Service (NTIS), A Directory of Computer Software and Related Technical Reports 1980, U.S. Department of Commerce, 5285 Port Royal Rd., Springfield, VA 22161, 1980.
[Oberndorf 97] Oberndorf, P. Facilitating Component-Based Software Engineering: COTS and Open Systems. Proceedings of the Fifth International Symposium on Assessment of Software Tools - SAST’97. June 1997.
[Osterweil 87] Leon J. Osterweil. Software Processes are Software, Too. Proceedings of the Ninth International Conference on Software Engineering, pp. 2-13, Monterey, CA, March 1987.
[Port 04] D. Port, Y. Yang, “Empirical Analysis of COTS Activity,” Proceedings, ICCBSS 2004.
[Port 04b] Port, D., and Chen, Z.H.: “Assessing COTS Assessment: How Much Is Enough?”, Proceedings, ICCBSS 2004, Feb. 2004, pp. 183-198.
[PRICE 04] Minkiewicz, A. The Real Costs of Developing COTS Software. http://www.pricesystems.com/downloads/pdf/COTSwhitepaper3-31-04.pdf
[Rashid 01] Rashid, A. and G. Kotonya (2001) Risk Management in Component-Based Development: A Separation of Concerns Perspective. ECOOP Workshop on Advanced Separation of Concerns (ECOOP Workshop Reader). Springer-Verlag Lecture Notes in Computer Science.
[Reifer 01] D. J. Reifer, Making the Software Business Case. Addison-Wesley, September 2001.
[Reifer 03] Donald J. Reifer, Barry W. Boehm, Murali Gangadharan: Estimating the Cost of Security for COTS Software. ICCBSS 2003: 178-186.
[Reifer 04] D. Reifer et al., “COTS-Based Systems: Twelve Lessons Learned,” Proc. 3rd Int’l Conf. COTS-Based Software Systems (ICCBSS 04), LNCS 2959, Springer-Verlag, 2004, pp. 137–145.
[Royce 70] Royce, W. W. Managing the Development of Large Software Systems: Concepts and Techniques. Proceedings of IEEE WESCON, pp. 1-9. 1970.
[SAB 00] United States Air Force Science Advisory Board Report on Ensuring Successful Implementation of Commercial Items in Air Force Systems. SAB-TR-99-03, 2000.
[Sledge 98] Sledge, C., and Carney, D., : Case Study: Evaluating COTS Products for DoD Information Systems. SEI Monographs on the Use of Commercial Software in Government Systems. July 1998.
[Smith 97] Smith, R., Parrish, A., Hale, J., "Component Based Software Development: Parameters Influencing Cost Estimation", Twenty-second Software Engineering Workshop, NASA/Goddard Space Flight Center Software Engineering Laboratory (SEL), Greenbelt, MD, December 1997, pp. 283-301.
[Sommerville 00] Sommerville, I. Software Engineering, 6th Edition. Addison Wesley, 2000.
[Swanson 97] Swanson, B.D., MacMagnus, J.G., "C++ Component Integration Obstacles", Crosstalk, May 1997, Vol. 10, No. 5, pp. 22-24.
[Tyson 03] Tyson, B., Albert, C., and Brownsword, L.: Implications of Using the Capability Maturity Model Integration (CMMI) for COTS-Based Systems., ICCBSS 2003.
[Vigder 98] Vigder, M., and Dean, J.: Building Maintainable COTS-Based Systems. In: International Conference on Software Maintenance. Washington DC (1998).
[Voas 98] Voas, J.M., “The Challenges of Using COTS Software in Component-Based Development”, Computer, June 1998, pp. 44-45.
[Wallnau 98] Wallnau, K. C., Carney, D., Pollak, B., “How COTS Software Affects the Design of COTS-Intensive Systems,” http://interactive.sei.cmu.edu/Features/1998/june/cots_software/Cots_Software.htm
[Whatis] http://searchcrm.techtarget.com/sDefinition/0,,sid11_gci789218,00.html
[Yakimovich 99] Yakimovich, D., Travassos, G.H., Basili, V.R., "A Classification of Software Components Incompatibilities for COTS Integration", Software Engineering Laboratory Workshop, 1999.
[Yang 04] Yang, Y., Boehm, B., “Guidelines for Producing COTS Assessment Background, Process, and Report Documents,” USC-CSE-2004-502, Tech Report, Feb. 2004.
[Yang 05] Yang, Y. “Process Patterns for COTS-Based Development”, Proceedings, 2005 Software Process Workshop, Beijing, China. May 2005. Springer 2005.
[Yang 05b] Yang, Y., Bhuta, J., Port, D., Boehm, B.,: “Value-Based Processes for COTS-Based Development”, IEEE Software, July/August 2005.
[Yang 06] Yang, Y., Boehm, B., Wu, D.: “COCOTS Risk Analyzer”, Proceedings, 2006 International Conference on COTS-Based Software Systems (ICCBSS’06), Orlando, FL, USA, Feb. 13-17, 2006.
[Yang 06b] Yang, Y., Boehm, B., Clark, B.: “Assessing COTS Integration Risks Using Cost Estimation Inputs”, Proceedings, 2006 International Conference on Software Engineering (ICSE’06), Shanghai, China, May 20-28, 2006.
[Yang 06c] Yang, Y., Boehm, B.: “Optimizing Process Decisions via Risk Based Prioritization”, Proceedings, SPW/ProSim 2006, Shanghai, China, May 15-17, 2006.
Appendix A Empirical analysis results
Several empirical studies have been performed on five years of USC e-services projects, as discussed in Section 3. This appendix lists the specific data examined in the empirical analysis and the results concluded from it.
Table A.30 Data on CBA growth trend and CBA classification
Year | # of total projects | # of CBAs | % of CBAs | # of AICBAs (ProjectIDs) | # of TICBAs (ProjectIDs) | # of GICBAs (ProjectIDs)
a1997 | 11 | 3 | 0.27 | 0 | 0 | 3 (7, 13, 17)
a1998 | 20 | 7 | 0.35 | 4 (8, 13, 15, 20) | 0 | 3 (4, 5, 9)
b1999 | 6 | 3 | 0.50 | 0 | 0 | 3 (5, 13, 15)
a1999 | 19 | 8 | 0.42 | 2 (16, 17) | 1 (20) | 5 (2, 3, 9, 19, 21)
b2000 | 8 | 4 | 0.50 | 4 (2, 12, 19, 21) | 0 | 0
a2000 | 17 | 8 | 0.47 | 4 (5, 9, 12, 15) | 2 (2, 21) | 2 (3, 14)
b2001 | 7 | 4 | 0.57 | 0 | 1 (21) | 3 (3, 14, 17)
a2001 | 18 | 11 | 0.61 | 5 (2, 5, 7, 8, 10) | 4 (13, 19, 20, 21) | 2 (9, 11)
b2002 | 10 | 7 | 0.70 | 2 (7, 8) | 3 (19, 20, 21) | 2 (9, 11)
Note: Only data from the Fall semesters was used to draw the growth trend chart of Figure 3.1, corresponding to the rows a1997, a1998, a1999, a2000, and a2001 in the above table. The numbers in the above table are elaborated in Table A.31 - Table A.35 next.
Table A.31 Data on a1997 projects
ProjectID | ProjectName | CBS Type | COTS
Team 1a | N/A
Team 2a | Hancock Museum Virtual Tour Project | Non-CBA
Team 3a | Serial Control Record Builder for Serial Publications | Non-CBA
Team 4a | Statistical Chart | Non-CBA
Team 5a | Virtual Tour Generator | Non-CBA
Team 6a | N/A
Team 7a | Technical Report System ‘98 | Gluecode CBA | NCSTRL database
Team 8a | Inter-Library Loan (ILL) borrowing process at the Global Express | Non-CBA | SIRSI database
Team 9a | N/A
Team 10a | Business School Working Papers | Non-CBA | MS Access
Team 11a | N/A
Team 12a | N/A
Team 13a | Virtual Education Library Assistant | Gluecode CBA | Microsoft Index Server, Windows Internet Information Server 3.0, Windows NT 4.0 Server, MS Access, Oracle, MS SQL Server
Team 14a | N/A
Team 15a | Library Reference Interface | Non-CBA
Team 16a | International (French) Cross-Cultural Teaching Model | Non-CBA
Team 17a | Network Consultant system | Gluecode CBA | plug-in network applications: email editor, web browser
Table A.32 Data on a1998 projects
ProjectID | ProjectName | CBS Type | COTS
Team 1a | Digital Document Creation and Storage | Non-CBA | MS Access
Team 2a | Seaver Science Auditorium Scheduling System | Non-CBA | MySQL
Team 3a | Asian Film Database System | Non-CBA
Team 4a | Virtual USC | Gluecode CBA | SyBase
Team 5a | Hispanic Digital Archive | Gluecode CBA | IBM Digital Library, Net.Data, AccuSoft ImageGear 98, Web Server
Team 6a | Authoring tool for the ADE system | Non-CBA
Team 7a | N/A
Team 8a | Metadata Creation for Digital Records | Assessment CBA | IBM Digital Library, Berkeley Digital Library SunSITE, NASA Digital Library, JDK 1.5, JFC Swing API
Team 9a | WWI AE Project | Gluecode CBA | IBM Digital Library
Team 10a | Business Q&As | Non-CBA
Team 11a | Dissertation Records Updating System | Non-CBA | SIRIS, UMI
Team 12a | Seaver Science Auditorium Scheduling | Non-CBA | MS Outlook, Lotus Notes
Team 13a | WWI-Access Enhancement | Assessment CBA | MS Access, MS SQL Server, IBM Digital Library, Net.data, Web server
Team 14a | Data Mining the Library Catalog | Non-CBA
Team 15a | Voice Data Entry for Digital Metadata | Assessment CBA | Dragon Naturally Speaking, IBM Via Voice, Philips FreeSpeech98, Kurzweil Voice Commands
Team 16a | New Book List | Non-CBA | ListPro
Team 17a | ARIEL to Web Document Delivery | Non-CBA
Team 18a | Doheny Book Locator | Non-CBA
Team 19a | ADE Authoring Tool | Non-CBA | InfoMix, Star Office
Team 20a | Social work current awareness service | Assessment CBA | MS Access, MS SQL Server, Oracle 7&8, Sybase, Infomix
Team 21a | Dinner Reservation System
Table A.33 Data on a1999 projects
ProjectID | ProjectName | CBS Type | COTS
Team 1 | Doheny Library Virtual Tour | Non-CBA | Sybase, QTVR Viewer
Team 2 | Viewing utility for oversized scanned paper objects | Gluecode CBA | ER Mapper, MR. SID
Team 3 | ILL/OCLC Record Retrieval to Manage Circulation Process | Gluecode CBA | OCLC, SIRIS
Team 4 | Digitizing and Access to Chinese, Japanese, Korean Materials | Non-CBA | NJStar CJK Viewer, MagicWin 98 CJK Viewer, Unionway-Asian SuperPack 97, OCR Programs
Team 5 | Daily Summary of Japanese Press | Non-CBA
Team 6 | URL Link Checker/Confirmation Utility (OO) | Non-CBA
Team 7 | Hispanic Digital Archive Enhancement
Team 8 | Managing Multimedia Databases for Instruction and Research | Non-CBA
Team 9 | ILL/OCLC Record Retrieval to Manage Circulation Process | Gluecode CBA | OCLC, SIRIS
Team 10 | Serial Invoice Loader | Non-CBA
Team 11 | Automated New Book List generator | Non-CBA | CREN’s ListProc
Team 12 | Chicano/Latino Serial's Collection | Gluecode CBA | Oracle, MS Access, SIRIS, ViewImages CGI and StoreImages Scripts
Team 13 | WEBCAT-Extension to third-party databases | Non-CBA | SIRIS' Webcat
Team 14 | Vacation/Sick Leave tracking system | Non-CBA | MySQL
Team 15 | Business Q&As | Non-CBA | MS Access, Cold Fusion
Team 16 | Bookstores online virtual Calendar | Assessment CBA | Webevent, web calendar
Team 17 | Bookstores On Line Email & Student-Student Communications | Assessment CBA | Imail Server 5 for WNT, Eudora Internet Mail Server, Mi'Server, CommuniGate Pro, iiWebMail; Ichat paging system, Ichat Rooms, Who's Who Abbot chat Server, ParaChat
Team 18 | Bookstores On Line Site and Ad Management | Non-CBA | Oracle
Team 19 | Easy WINWIN | Gluecode CBA | Group System, Rational Rose, MySQL
Team 20 | Electronic Process Guide Authoring tool | Tailoring CBA | EPG tool
Team 21 | MBASE Deliverables Manager | Gluecode CBA | ClearCase, MS Access
Table A.34 Data on a2000 projects
ProjectID | ProjectName | CBS Type | COTS
Team 1 | Station Data Project for the Web | Non-CBA | MS Access DB
Team 2 | Web Site Portal and Collaborative Filtering | Tailoring CBA | Hyperwave Information Server
Team 3 | Pathology Image Search Engine | Gluecode CBA | UMLS, Apache, MySQL
Team 5 | Electronic Time Clock | Assessment CBA | Vitrix, Kronos, Journyx, Peoplesoft, SAP, Matrix
Team 6 | Photocopied Table-of-Content Faculty Current awareness of services | Non-CBA | Apache
Team 7 | Wilson Dental Library New Book List | Non-CBA | Web Server
Team 8 | FullText Title Database | Non-CBA | Sybase, Tomcat
Team 9 | Smart Troubleshooting | Assessment CBA | Htdig, Hyperwave
Team 12 | Easy Audio/Video Creation and Manipulation | Assessment CBA | iMovie, Final Cut Pro
Team 13 | Integrated Digital Archive Image Composer | Non-CBA | Apache, LDAP, IDAIMG, IE, Netscape, IDA
Team 14 | Access&Display Archive Image Composer | | FileMaker Pro, Apache
Team 15 | Integrated Digital Archive of LA/database design | Assessment CBA | Oracle, MS Access, MySQL
Team 16 | IDA/LA Ingest/cataloging design component | Non-CBA | ArcIMS, MySQL
Team 17 | Web Mail | Gluecode CBA | Tomcat, Apache, MS Outlook, Enhydra
Team 18 | Network Utilization Tool | | Ploticus (graphing tool), MySQL
Team 19 | Easy COCOMO II | Non-CBA | None. Web server
Team 21 | CSE Affiliates Private Area Portal | Tailoring CBA | Hyperwave
Table A.35 Data on a2001 projects
ProjectID | ProjectName | CBS Type | COTS
Team 1 | Wilson Dental Library | Non-CBA | MySQL, Apache
Team 2 | Collection Information System | Assessment CBA | Broadvision, Percussion, Hyperwave
Team 3 | Multimedia Equipment Scheduling | Non-CBA | MySQL, Apache
Team 4 | Unsupported Mac/WIN Databases to Web Solutions | Non-CBA | MySQL, Apache, Tomcat
Team 5 | Digital Audio Reserves | Assessment CBA | Windows Media Encoder, Windows Media Server, QuickTime Pro, QuickTime Streaming Server
Team 6 | ISD Web-based Contract Mgmt System | Non-CBA | MySQL
Team 7 | Velero Station Data for Web-II Map Enhancement | Assessment CBA | assessed 18 GIS COTS (but none acceptable)
Team 8 | USC Collaborative Services | Assessment CBA | eProject, dotproject, eRoom, eStudio; Blackboard, iPlanet
Team 9 | Student/Staff Directory | Gluecode CBA | ZOPE, MySQL, Apache
Team 10 | Query Server for Electronic Resources | Assessment CBA | Thesus (Querying Server), Apache
Team 11 | Collaborative Project Management Tool | Gluecode CBA | MS Package, PSM, Hyperwave, RythmX, MS Project Central; iPlanet
Team 13 | MBASE Interactive Process Guide Authoring Tool | Tailoring CBA | Spearmint, RPW
Team 15 | Strategic Risk-Value Assessment Tool | Non-CBA | MySQL, Tomcat, Apache
Team 16 | Web-based SAL Conference Room Reserve System | Non-CBA | MySQL
Team 19 | Opportunity Tree Experience Base | Tailoring CBA | Hyperwave
Team 20 | 577 MBASE in BORE | Tailoring CBA | BORE
Team 21 | Quality Management Through BORE | Tailoring CBA | BORE
Table A.36 Data on development effort (Year: 2001-2002)
Category | Activity | Project 1 | 6 | 7 | 8 | 9 | 11 | 15 | 19 | 20 | 21
Management | Life Cycle Planning | 2 | 16 | 13.5 | 86.5 | 15 | 9 | 38.9 | 45 | 152 | 42.5
Management | Control and Monitor. | 2 | 21 | 18.3 | 52 | 24 | 20 | 47 | 15.8 | 47 | 18
Management | Client Interaction | 29.6 | 24 | 52.5 | 142.6 | 20 | 110 | 22.5 | 60.7 | 48 | 110.9
Management | Team Interaction | 91.4 | 27 | 72.7 | 264.5 | 109 | 179 | 155.8 | 135.2 | 89 | 47.5
Management | Project Web Site | 8 | 42 | 36.2 | 54 | 50 | 36.1 | 26.5 | 20.1 | 32.5 | 38
Management | ARB Review | 45 | 13 | 30.5 | 20.8 | 9 | 13.5 | 22 | 23.6 | 39 | 22
Environment and CM | Training | 122 | 14 | 67 | 12 | 25.5 | 3 | 17.5 | 77.2 | 32 | 51
Environment and CM | Install. and Admin. | 3 | 32 | 9 | 10 | 0 | 5.5 | 12 | 0.5 | 0 | 5
Environment and CM | Config. Management | 24 | 10 | 12.8 | 25.5 | 19.5 | 5 | 15.25 | 10.4 | 16.5 | 19
Environment and CM | Custom Toolsmithing | 0 | 0 | 20.5 | 2 | 4 | 0 | 0 | 0 | 0 | 1
Environment and CM | COTS Initial Filtering | 0 | 0 | 14 | 154 | 1 | 8 | 1 | 0 | 2 | 0
Requirement | WinWin Negotiations | 0 | 0 | 3.5 | 3.5 | 0 | 1 | 0 | 0 | 4 | 1.5
Requirement | Prototyping | 15 | 4 | 0.5 | 0 | 0 | 2 | 0 | 43 | 0 | 3
Requirement | Modeling for OCD | 0 | 4 | 0 | 0 | 3 | 0 | 0.75 | 31.3 | 3.5 | 1
Requirement | Documenting of OCD | 6 | 3.5 | 3.5 | 5 | 7 | 10 | 9.25 | 12 | 6.5 | 8
Requirement | Modeling for SSRD | 0 | 5 | 1 | 26 | 0 | 3 | 0.75 | 0 | 0 | 0
Requirement | Documenting of SSRD | 7 | 8 | 8 | 27 | 6 | 5 | 2 | 4 | 11 | 3.5
Design | Modelling for SSAD | 28 | 27 | 4 | 5 | 3 | 10 | 12 | 25 | 1 | 8.5
Design | Documenting for SSAD | 9 | 28 | 10 | 33 | 16.5 | 7 | 68.25 | 27.5 | 9 | 8.5
Design | COTS Final Selection | 0 | 0 | 5.5 | 65.5 | 0 | 5 | 0 | 0 | 0 | 0
Design | COTS Tailoring | 0 | 0 | 7 | 51 | 2 | 28 | 0 | 19.5 | 0 | 0
Implementation | Component Prototyping | 9 | 22 | 3 | 3.5 | 1 | 30 | 27 | 28.1 | 26 | 4
Implementation | Comp. Implementation | 274 | 175 | 2 | 0 | 81 | 11 | 102 | 172.2 | 187 | 129.5
Implementation | COTS glue code | 0 | 0 | 2 | 0 | 20 | 26 | 15 | 8.6 | 0 | 0
Assessment | Business Case Analysis | 3 | 2 | 0 | 41 | 0 | 11 | 0 | 7 | 0 | 1
Assessment | FRD | 3 | 1 | 12.3 | 65 | 2 | 15 | 8.5 | 9 | 11 | 10
Assessment | Test Planning | 9 | 20.5 | 10.8 | 34.4 | 23 | 10 | 82.75 | 20 | 43.5 | 15
Assessment | Inspection & Peer Review | 71 | 9 | 43 | 0 | 56 | 11 | 20.5 | 32 | 71 | 27
Assessment | Trans. & Support Planning | 23 | 32.5 | 16.8 | 38.5 | 15 | 31 | 34 | 92.7 | 64 | 108
User defined | User_Def_1 | 76 | 43.5 | 16.3 | 23.5 | 6 | 166 | 260.5 | 88.1 | 208 | 89
User defined | User_Def_2 | 7 | 6 | 3 | 9.5 | 0 | 14.1 | 13 | 4 | 12 | 7
| Sum | 867 | 590 | 499 | 1255 | 518 | 784 | 1015 | 1013 | 1115 | 779.4
| Sum of COTS effort | 0 | 0 | 28.5 | 270.5 | 23 | 67 | 16 | 28.1 | 2 | 0
| Adjusted sum of COTS effort | 83 | 49.5 | 47.8 | 303.5 | 29 | 247 | 289.5 | 292.4 | 409 | 225.5
| COTS Effort % | 0.10 | 0.08 | 0.10 | 0.24 | 0.06 | 0.31 | 0.02 | 0.29 | 0.37 | 0.29
This set of data was used to derive the effort boundary in the CBA definition in Section 3.1.2 (page 26). Note the big gap between 0.1% of COTS-related effort in non-CBA development and 24% of COTS-related effort in CBA development.
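The COTS Effort % row in Table A.36 appears, to within rounding, to be the adjusted sum of COTS effort divided by the total effort sum. A minimal check of that assumption for three project columns (a couple of table cells deviate, possibly transcription noise):

```python
# Recomputing the COTS-effort fraction for three project columns of
# Table A.36, assuming COTS Effort % = Adjusted sum of COTS effort /
# Sum of total effort, rounded to two decimals.
totals = {"1": 867.0, "8": 1255.0, "20": 1115.0}       # "Sum" row
adjusted_cots = {"1": 83.0, "8": 303.5, "20": 409.0}   # "Adjusted sum" row

fractions = {pid: round(adjusted_cots[pid] / totals[pid], 2) for pid in totals}
print(fractions)  # {'1': 0.1, '8': 0.24, '20': 0.37}
```

The computed fractions 0.10, 0.24, and 0.37 match the corresponding COTS Effort % cells for projects 1, 8, and 20.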
Table A.37 Data on COTS Activity Sequences (Year: 2001-2002)
Project 7: Velero Station Data for Web-II Map Enhancement
  a2001: week 3-4: Assessment; week 8: non-COTS solution chosen; week 12: newly introduced COTS
  b2002: week 1-6: COTS assessment; week 7-10: Tailoring; week 8-10: glue coding; week 10: CCD proved COTS did not satisfy requirements well; week 11-12: redirect to non-COTS solution
Project 8: USC Collaborative Services
  a2001: week 2-4: initial filtering; week 5-8: customizing dotProject; week 8: redirect project to Assessment; week 9-12: detailed COTS assessment
  b2002: week 1-4: study new COTS candidates introduced by client; week 4-7: setting up testing environment (installation and configuration); week 8-15: evaluation testing
Project 9: Student/Staff Directory
  a2001: week 2: COTS selected; week 8: COTS would not work as expected; week 9-12: prototyping to evaluate project feasibility
  b2002: integrate ZOPE
Project 11: Collaborative Project Management Tool
  a2001: week 3-9: initial filtering; week 9-12: detailed evaluation
  b2002: week 1-5: assessing COTS for content mgmt and project mgmt; week 6: install iPlanet; week 6-7: research DocuShare; week 8: tailoring DocuShare; week 8: glue DocuShare and iPlanet; week 9-10: re-evaluate DocuShare due to exposed risks; week 11-12: tailoring DocuShare
Project 13: PLASMA
  a2001: week 1-2: evaluate Spearmint; week 3-8: create Spearmint entities, prototype with Spearmint; week 9: decompiling Spearmint to explore other features; week 10: evaluate Little-JIL; week 11-13: configuring Spearmint
  b2002: N/A
Project 19: Opportunity Tree
  a2001: week 1-3: study of COTS (Hyperwave); week 3 onward: prototyping
  b2002: Tailoring*; glue code development* (*not specified in progress reports; concluded from the effort data)
Project 20: MBASE in BORE
  a2001: week 1-3: learning and experimenting with BORE; week 3-11: prototyping in BORE; week 10: prioritizing requirements with respect to BORE performance
  b2002: N/A
Table A.38 Data on COTS activity sequences
ProjectID* | Team No. | Semester | Sequence | Notes
1 | 7 | a2001-b2002 | ACATGC | No-COTS solution first, then introduced COTS, finally followed non-COTS solution
2 | 8 | a2001-b2002 | AT(A(TG)) | Detailed COTS assessment by performing T and G
3 | 9 | a2001-b2002 | A(TG)AG |
4 | 11 | a2001-b2002 | A(TG)A(TG) |
5 | 13 | a2001 | ATAT |
6 | 19 | a2001-b2002 | ATG |
7 | 20 | a2001-b2002 | AT |
8 | 21 | a2001-b2002 | AT |
9 | OIV | a1999 | AT(AA)A(TG)(TGC) | See the detailed example in Section 3.5.
* This ProjectID refers to those in Table 3.1. The Team No. refers to the actual team number.
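In Table A.38's sequence notation, the letters A, T, G, and C denote the Assessment, Tailoring, Glue code, and Custom development process elements, and parentheses appear to group nested or concurrent activities. A small well-formedness check for such sequence strings might look like this (an illustrative helper, not part of the thesis tooling):

```python
def is_valid_sequence(seq: str) -> bool:
    """Check that an activity string uses only the A/T/G/C element
    letters and that its grouping parentheses are balanced."""
    depth = 0
    for ch in seq:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:          # closing before any opening
                return False
        elif ch not in "ATGC":
            return False
    return depth == 0

examples = ["ACATGC", "AT(A(TG))", "A(TG)AG", "AT(AA)A(TG)(TGC)"]
print(all(is_valid_sequence(s) for s in examples))   # True
print(is_valid_sequence("A)G"))                      # False
```

All of the sequences recorded in Table A.38 pass this check, while malformed strings with stray letters or unbalanced parentheses are rejected.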
Table A.39 Data on COTS effort distribution in USC projects
ProjectID* | Team No. | Assessment | Tailoring | Glue code | Year
12 | Team 1 | 188 | 0 | 9.5 | 2002-2003
1 | Team 8 | 382.8 | 57 | 0 | 2001-2002
13 | Team 3 | 39 | 1 | 6 | 2002-2003
2 | Team 7 | 32 | 7 | 2 |
3 | Team 13 | 49.8 | 22.6 | 0 |
4 | Team 14 | 23 | 7 | 5.5 |
5 | Team 11 | 53 | 29 | 26 | 2001-2002
6 | Team 21 | 2.6 | 54 | 1 | 2000-2001
7 | Team 21 | 6.5 | 225.5 | 0 | 2001-2002
8 | Team 20 | 6 | 407 | 0 | 2001-2002
9 | Team 19 | 6 | 26 | 8.6 | 2001-2002
10 | Team 3 | 20.5 | 14 | 12 |
14 | Team 6 | 23 | 12 | 26 | 2002-2003
11 | Team 9 | 14 | 3 | 22 | 2001-2002
15 | Team 14 | 56 | 19.5 | 82 | 2002-2003
17 | Team 22 | 21 | 4 | 72 | 2002-2003
16 | Team 19 | 4 | 2 | 103 | 2002-2003
* This ProjectID refers to the labels along the X-axis in Figure 3.2. The Team No. refers to the actual team number.
Appendix B Guidelines for developing assessment-intensive CBAs
1. Introduction
This document introduces the guidelines for COTS/NDI assessment-intensive projects. Please read the following general guidelines carefully before proceeding to the guidelines for the individual deliverables.
1.1 Scope
The Guidelines for Producing COTS Assessment Background, Process, and Report Documents (CAB, CAP, and CAR) apply to projects that need to assess the relative merits of commercial-off-the-shelf (COTS), non-developmental-item (NDI), and/or other pre-existing software products/components for use in a software system. Basically, COTS/NDI assessment activity takes place in the following two primary situations:
As part of the software process for a COTS-based development project that follows general guidelines such as Dynamic Systems Development Method (DSDM), Feature Driven Development (FDD), Model-Based Architecting and Software Engineering (MBASE), Rational Unified Process (RUP), or Team Software Process (TSP);
As a standalone COTS/NDI assessment activity to serve as the basis for future project decisions.
The guidelines cover the above two primary situations and were developed by integrating COTS-based development lessons learned and proven engineering practices to help teams prepare, plan, manage, track, and improve their COTS assessment activity.
1.2 Definitions
Commercial-Off-The-Shelf (COTS). We adopt the SEI COTS-Based System Initiative’s definition [1] of a COTS product: a product that is:
a. Sold, leased, or licensed to the general public;
b. Offered by a vendor trying to profit from it;
c. Supported and evolved by the vendor, who retains the intellectual property rights;
d. Available in multiple identical copies;
e. Used without source code modification.
Non-Developmental Item (NDI). NDIs are used in a similar manner to COTS products, but with one or more of the COTS criteria handled differently. Examples are:
Open source software: Has COTS product attributes a, c, and d; but not b, and optionally not e (making your own changes to open source software means that you have to integrate these changes with all future open source product releases).
Customer-furnished or corporate-furnished software: has COTS product attributes c and d, but not a and b, and optionally not e.
Reuse-repository software: can assume many forms, with practically any combination of COTS product attributes a, b, c, d, and e.
COTS-Based System (CBS). A CBS is generally defined as "any system which includes one or more COTS products." This includes most current systems, including many that treat a COTS operating system and other utilities as a relatively stable platform on which to build applications. Such systems can be considered "COTS-based systems," as most of their executing instructions come from COTS products, but COTS considerations do not affect their development process very much.

COTS-Based Application (CBA). To provide a focus on the types of applications for which COTS considerations do affect the development process, we define a COTS-Based Application [3] as a system for which at least 30% of the end-user functionality (in terms of functional elements: inputs, outputs, queries, external interfaces, internal files) is provided by COTS products, and at least 10% of the development effort is devoted to COTS considerations. The numbers 30% and 10% are not sacred quantities, but approximate behavioral CBA boundaries observed in historical CS577 projects. Obviously, CBA projects are only a subset of CBS projects.

COTS Activity. In our seven years of iteratively defining, developing, gathering project data for, and calibrating the COCOTS cost estimation model, we have identified three primary sources of project effort due to CBA development considerations. These are defined in COCOTS as follows:
COTS Assessment is the activity whereby COTS products are evaluated and selected as viable components for a user application.
COTS Tailoring is the activity whereby COTS software products are configured for use in a specific context. This definition is similar to the SEI definition of “tailoring” [4].
COTS Glue Code development and integration is the activity whereby code is designed, developed, and used to ensure that COTS products satisfactorily interoperate in support of the user application.
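The CBA boundary described above (at least 30% of end-user functionality from COTS products, and at least 10% of development effort devoted to COTS considerations) can be sketched as a simple classification check. This is an illustrative sketch only, not part of COCOTS, and the example figures are invented:

```python
# Illustrative sketch (not part of COCOTS): classifying a project as a
# COTS-Based Application (CBA) using the approximate boundaries observed
# in historical CS577 projects.

def is_cba(cots_functional_elements: int, total_functional_elements: int,
           cots_effort_hours: float, total_effort_hours: float) -> bool:
    """Return True if the project falls inside the approximate CBA boundary."""
    functionality_share = cots_functional_elements / total_functional_elements
    effort_share = cots_effort_hours / total_effort_hours
    return functionality_share >= 0.30 and effort_share >= 0.10

# Hypothetical example: 12 of 30 functional elements come from COTS (40%),
# and 120 of 800 staff-hours go to assessment/tailoring/glue code (15%).
print(is_cba(12, 30, 120.0, 800.0))  # True
```

Since the 30% and 10% figures are approximate behavioral boundaries rather than sacred quantities, a real classification would treat projects near the thresholds case by case.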
2. Context
COTS and NDI are similar in that their use can significantly reduce software development costs and schedules by avoiding the need to develop their capabilities. But using several COTS/NDI components involves a number of significant risks, including problems with incompatibilities, controllability, release synchronization, and life cycle support.
Using COTS/NDI also has significant impacts on software processes, especially those of COTS-Based Applications. For example, unlike traditional application development, having a specific, frozen requirements set in the early stages is risky for COTS/NDI-based development, because COTS/NDI may introduce more changes later on. These changes stem from mismatches between COTS/NDI and the desired system capabilities discovered afterwards, new releases of COTS/NDI products, newly introduced COTS/NDI products, changes in COTS/NDI market sectors, or changes in the application infrastructure.

The choice of COTS/NDI components will often change a system’s originally-defined requirements and architecture by making alternative COTS/NDI options more cost-effective. This establishes the need addressed here for a COTS/NDI assessment process to determine the right combination of COTS/NDI components, requirements, and architecture for the system.

Most COTS/NDI assessments will be carried out in the context of software system development. In this case, the contents of the three key COTS/NDI guideline documents can be simplified by integrating them with the contents of whatever development guidelines are being used (e.g., DSDM, FDD, MBASE, RUP, TSP). But in some cases, a customer will want only a standalone COTS/NDI assessment, to serve as a basis for future development decisions. These Guidelines are written to cover this self-contained case.

COTS/NDI assessments are also carried out in an overall process context that includes COTS/NDI tailoring and glue code development and integration. This process context is described with examples in [3]; detailed process guidelines for COTS tailoring and COTS glue code are under development and will be available in separate documents.
3. Key Documents
The key documents involved in COTS/NDI assessment are the COTS/NDI Assessment Background (CAB), COTS/NDI Assessment Process (CAP), and COTS/NDI Assessment Report (CAR). They are defined in a minimum-essential, tailoring-up fashion rather than in an exhaustive, tailoring-down fashion. As an MBASE example, the minimum-essential content of the CAB is Sections 2 (Shared Vision), 4.3 (Capabilities), and Section 4.4 (Levels of Service) in the Operational Concept Description (OCD). These provide the minimum-essential set of objectives, constraints, priorities, and situation background needed to perform the COTS/NDI assessment. All of the rest of the MBASE OCD, SSRD (requirements), and SSAD (architecture) documentation is optional, to be added only when the risks of not documenting something outweigh the COTS assessment costs and risks of documenting it.
The CAB, CAP, and CAR are living documents. They should be lightweight and updated whenever new risks, opportunities, or changes emerge.
COTS/NDI Assessment Background (CAB) Document
Preface
The CAB document describes the organizational needs and constraints, as well as the conditions of the related commercial market sectors. It should provide the minimum essential set of organization objectives, constraints, and priorities (OC&P’s), and the situation background needed to perform a COTS/NDI assessment. It can be tailored up on a risk-driven basis to cover special COTS/NDI assessment concerns.
1. Introduction
1.1 Purpose and Scope
This section summarizes the purpose of the CAB document and the scope of the COTS assessment, and identifies the project stakeholders and possible COTS candidates. It should include:
"The purpose of the document is to provide the minimum essential set of objectives, constraints, priorities, and situation background needed to perform a COTS/NDI assessment for the [name of system].
Its scope covers the COTS/NDI assessment aspects of the [name portions] portions of the system plus the additional complementary activities of [name activities*].
(Optional) Current life cycle phase and milestone.
The description of project stakeholders, including their names, organizations, titles, and roles in the COTS assessment.
The list of current COTS candidates."
* Example complementary activity: market trend analyses and product line analyses.
1.2 Reference
Provide a complete set of citations to all sources used or referenced in the preparation of this document. The citations should be in sufficient detail that the information used in this document can be traced to its source. Sources typically include books, papers, meeting notes, tools, and tool results.
1.3 Change Summary
For each version of the CAB document, describe the main changes since the previous version and briefly explain why. The goal is to help a reviewer focus on the most critical parts of the document needing review. The following change sources must be included:
OC&P’s changes (OC&P’s newly introduced, removed, relaxed, and/or reprioritized);
COTS changes (COTS candidates added, discarded, and/or updated);
COTS vendor changes (new vendor, new vendor claim, or vendor support changes);
Other changes.
2. Overall System Objectives, Constraints, and Priorities
For COTS-intensive projects, the overall system OC&P’s serve not only as the shared vision among all stakeholders, but also as the basis from which the COTS evaluation criteria and testing scenarios are established. Additionally, COTS-assessment-intensive projects usually retain a high degree of requirements flexibility, for the sake of a better compromise and balance between what the best available COTS options can provide and what the clients want and can afford.
Hence, for COTS-assessment-intensive projects, though the details of the OC&P’s may need to be modified at every assessment cycle, it is very important to keep the OC&P’s clearly stated so that the COTS assessment can start with the essential OC&P set and the stakeholders can negotiate and reexamine them when reviewing intermediate assessment results. This way, the OC&P’s can be continuously refined as the assessment goes into further detail.
2.1 System Objective Description (SO)
A concise description of the system objectives is presented here, focusing on critical system capabilities in terms of the customer’s needs, and relating them to the evaluation criteria. It should take the following form:
For (target customer)
Who (statement of the need or opportunity)
The (project name) is to investigate the feasibility of using (COTS/NDI categories) in the (product name)
That (statement of key benefit, that is, the compelling reason to select and integrate the recommended solution)
Unlike (developing from scratch or other comparative alternatives)
Our product (statement of primary advantages resulting from COTS/NDI assessment)
Here is an example for a collaborative services system: "Our client from the xxx organization needs a stronger, user-friendly online collaborative services system to better support the tremendous amount of daily collaborative work of its faculty, staff, and students. Our project will search for COTS collaborative packages and perform a thorough assessment of their fitness for the xxx Collaborative Services System. Unlike other COTS products of the same kind, our final COTS recommendation to the client should be mature, easy to integrate, and cost-effective for our client."

This section should briefly discuss the resulting organizational benefits, in terms of future profitability, reputation, or market capitalization, from the investment in the system to be built. The next section, about the "Results Chain," helps to explain the path by which these benefits are realized. The Results Chain starts from the initiative of performing a COTS assessment rather than the implementation of the system. Some assumptions and contributions related to the COTS assessment activity may also be identified in the Results Chain diagram.
Figure B.27 Benefits Realization Approach Results Chain

Figure B.27 shows a simple Results Chain provided as an example for COTS assessment projects. It establishes a framework linking Initiatives that consume resources (e.g., performing COTS assessment on online collaborative services COTS products) to Contributions (not delivered systems, but their effects on existing operations) and Outcomes, which may lead either to further contributions or to added value (e.g., increased savings due to higher productivity from integrating the best COTS solution). A particularly important contribution of the Results Chain is the link to Assumptions, which condition the realization of the Outcomes. The Results Chain provides a valuable framework by which your project can work with your clients to identify additional non-software initiatives that may be needed to realize the potential benefits enabled by the project initiative. These may also identify some additional success-critical stakeholders who need to be represented and brought into the shared vision.
[Figure B.27 diagram content: Initiative "Assessing collaborative services COTS products"; Contribution "Gaining more complete project visions"; Intermediate outcome "Initial filtering of available COTS," leading to "Detailed assessment"; Outcome "Best solution available"; Assumptions "Selected COTS is proved cost-effective" and "OC&P's are flexible to match selected COTS features."]

2.2 Key Stakeholders
Identify each stakeholder by their home organization, their authorized representative for project activities, and their relation to the Results Chain. The four classic stakeholders are the software/IT system’s users, customers, developers, and maintainers. Specifically, for a COTS-assessment-intensive project, the key stakeholders should include the following parties:
1. Customer: Project initiator.
2. User: The people who are going to use the proposed system. It is recommended that the COTS assessment involve the end users in the evaluation process, present evaluation results (intermediate and final) to them, collect COTS usage information from them (if a trial version is available), and provide appropriate training for them to experiment with the COTS product(s).
3. Domain experts: A new and important stakeholder role introduced for COTS assessment projects. These are people from the customer organization who can provide feedback and advice for solving domain-specific problems, such as helping to acquire and prioritize OC&P’s, or helping to reconcile conflicts between a COTS product and the desired OC&P’s during the COTS assessment process.
4. COTS experts: A new and important stakeholder role introduced for COTS assessment projects. These are the technical representative(s) from the COTS vendor side who can provide technical help, such as answering questions during the process of obtaining, installing, experimenting with, and analyzing the test version of the COTS product. It is recommended that these technical representatives be involved in the evaluation process.
5. COTS vendors: A new and important stakeholder role introduced for COTS assessment projects. They provide COTS information (such as functionality, performance, price model, and possible future evolution direction) and a test version of the product to be assessed.
6. Other necessary stakeholders, such as the system administrator and maintainer: their definitions and responsibilities are similar to those in traditional development projects. Note that their specific responsibilities and activities may be highly COTS-dependent.
7. Evaluator: Usually called the "developer" in CS577 projects. Evaluators need to learn and master knowledge beyond the conventional software development scope, such as the ability to explore information through market surveys, to hold regular contact and question-and-answer sessions with the COTS vendors, domain experts, and COTS experts, to report to and renegotiate with the customer according to the progress of the COTS assessment, and to make the final COTS recommendation to the customer.
Additional stakeholders may be system interfaces, subcontractors, suppliers, venture capitalists, independent testers, and the general public (where safety or information protection issues may be involved).
Common Pitfalls:
Not paying enough attention and effort to getting the newly introduced COTS assessment stakeholders involved. Many schedule delays in COTS-intensive projects are due to the inability to connect with COTS vendors, domain experts, or COTS experts during the assessment process, both to collect information about the COTS products that is as complete as possible and to collect updates regularly.
Being too pushy or not pushy enough in getting your immediate clients to involve the other success-critical stakeholders. Often, this involves fairly delicate negotiations among operational organizations. If things are going slowly and you are on a tight schedule, seek the help of your higher-level managers.
2.3 COTS Assessment Boundary and Environment
The system boundary distinguishes between the services for which your project will be responsible for assessing the fitness of the candidate COTS solution(s), and the stakeholder organizations, COTS vendors, and interfacing systems over which your project has no authority or responsibility, but with which your project must coordinate to realize a successful COTS assessment and its resulting benefits.
Figure B.28 shows the COTS assessment context diagram used to define the system boundary. It shows the type of information that may be included in a COTS-intensive project’s context diagram, but it is not intended to be a one-size-fits-all template.
Figure B.28 COTS assessment context diagram
[Figure B.28 diagram content: a central "Top-level system objectives" box surrounded by Service Users, System Administrator, System Maintainer, COTS Vendors, Data Sources, and Critical Interfacing Systems.]
The context diagram for the COTS assessment should include entities for all the key operational stakeholders described above (CAB 2.2).
The "Top-level system objectives" box defines the boundary of the desired system. It should include the list of top-level system capabilities that your COTS assessment will be based upon.
Common Pitfalls: Including system design details
2.4 Major Project Constraints
Describe any constraints that are critical to the success of the COTS assessment, such as:

Table B.40 Project Constraints Specification
Identifier: <<Give a reference number and name>>, such as "CST-1"
Name: Limited Budget
Description: <<Describe the constraint>>, such as "The customer organization is only willing to buy COTS within $25k."
Influence on COTS assessment: <<Indicate how this constraint can affect the COTS assessment>>, such as "Need to add an evaluation criterion to filter out those COTS candidates that cost more than $25k"
Measurable: <<Indicate how this constraint can be measured with respect to the specific elements it addresses, or what needs to be looked at within the project to see that the constraint has been adhered to>>, e.g., "Cost analysis can provide an estimated cost for each solution"
Relevant: Relative to COTS Evaluation Criteria.
Specific: <<Describe what particular aspects of the constraint>>, e.g., "$25k".
(There is no need to repeat such information if it is obvious from the above information.)
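As a minimal illustration of how a constraint such as "CST-1: Limited Budget" becomes an evaluation-criteria filter, the following sketch screens hypothetical COTS candidates against the $25k budget. The candidate names and prices are invented:

```python
# Hypothetical sketch: turning the CST-1 "Limited Budget" constraint into an
# initial-filtering criterion for COTS candidates.

BUDGET_LIMIT = 25_000  # "only willing to buy COTS within $25k"

candidates = [
    {"name": "ProductA", "license_cost": 18_000},
    {"name": "ProductB", "license_cost": 40_000},
    {"name": "ProductC", "license_cost": 24_500},
]

# Keep only candidates whose license cost satisfies the budget constraint.
within_budget = [c["name"] for c in candidates if c["license_cost"] <= BUDGET_LIMIT]
print(within_budget)  # ['ProductA', 'ProductC']
```

In a real assessment such a filter would be one criterion among many, applied during initial filtering before the more expensive detailed evaluation.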
3. Domain/Organization Description
This section should refine the shared vision discussed in Section 2. It should provide sufficient detail that the stakeholders will understand the rationale for the COTS assessment and will be able to evaluate its goals and success factors. This section should be written using terms understood by all the stakeholders in the project, especially customers and domain experts. Do not become so technical that customers and high-level managers cannot understand.
3.1 Organization Background
Provide a brief overview (a few sentences) of the organization(s) sponsoring the COTS assessment of the system, the organization that will be the users of the system, and the organization that will be maintaining the system. (These may or may not be the same organization.)
Recommendations
Consider using each group’s mission statements and/or their objectives and goals as the basis for this section.
Do not get carried away with details that are not centrally relevant to the new system.
3.2 Current Organization Environment
In the following sections, describe the current environment of the organization. (This model is sometimes called an "as-is" model.) The description should include the structure, artifacts, processes, and shortcomings of the organization’s environment, which should be addressed by the COTS solution obtained through the COTS assessment.
3.2.1 Structure
Briefly describe the current workers (e.g., people roles, systems) of the organization and the outside actors that interact with the organization (e.g., customers, suppliers, partners, prospects). Each worker and outside actor should be relevant to the current organization’s processes (CAB 3.2.3). Identify which workers and outside actors interact with other workers and actors. Provide a brief description of each one’s role, purpose, and responsibilities.
Recommendations:
1. Give each worker and outside actor classifier a name that expresses the responsibilities of its instances.
2. A good name is usually a noun (e.g. “librarian”), a short noun phrase (e.g. “department secretary”), or the noun form of a verb (e.g. “administrator”).
3. Avoid names that sound alike or are spelled alike, as well as synonyms.
4. Clear, self-explanatory names may require several words.
5. To facilitate traceability, assign each worker or outside actor classifier a unique number (e.g. Worker-1, BActor-10).
3.2.2 Artifacts
Briefly describe the current artifacts (e.g., documents, products, resources) inspected, manipulated, or produced by the organization, and the relations among the artifacts. Each artifact identified should be relevant to the current organization’s processes (CAB 3.2.3). Your customer can give you information about the current artifacts.
What are the major documents inspected, manipulated, or produced by the organization? What resources are used in production by the organization? What items are produced by the organization? What is the general purpose, role, or description of each artifact?
3.2.3 Processes
Describe the operational processes within the current organization used to fulfill its business goals. For each process, identify which workers and outside actors participate in the process and which artifacts are inspected, manipulated, or produced by it; and describe at a high level the actions performed by each worker and outside actor during the process.

A major objective of the process model is to provide a context for the evaluation criteria and test cases of the COTS assessment to be developed in CAR Section 4. For example, for a process described here as "the proposed system will eliminate or make efficient manual order entry and verification steps," there should be a related evaluation criterion or business test scenario in CAR Section 4.
Representation:
Create a list of process names and a story to describe each process. Each story is a paragraph describing "something the system should do" (as used in Extreme Programming [6] and some Agile Methods).
Table B.41 Business Use Case Description
Identifier: Unique identifier for traceability (e.g. Process-xx)
Use-Case Name: Name of use-case
Purpose: Brief description of purpose
Priority: Relative importance of the process to the business
Flexibility: Must-have or nice-to-have
Worker or outside actor: List of workers or outside actors participating in the use-case
Pre-conditions: Description of the state that workers, outside actors, and artifacts should be in before the use-case is performed. (informal text, OCL2, or both)
Post-conditions: Description of the state that workers, outside actors, and artifacts are in after the use-case is performed. (informal text, OCL, or both)
Actions Performed: Describe the actions performed by a worker or an outside actor.
Recommendations:
1. Describe only those processes that are relevant to the proposed system (e.g., processes that the proposed system will participate in).
2. Give each process a name that expresses the behavior that it represents.
3. A good name is usually a verb, or verb phrase.
4. Each name must be unique relative to the containing package. (Two classifiers in different packages can have the same name.)
2 UML’s formal specification language, called an Object Constraint Language (OCL).
5. Avoid names that sound alike or are spelled alike, as well as synonyms.
6. Clear, self-explanatory names may require several words.
7. To facilitate traceability, assign each process a unique number (e.g. Process-01 or BUC-01).
8. Avoid overly technical or implementation-related processes unless they are already present in the current system. An example of an appropriate-level process for an Order Entry System would be Add New Sales Item To Order.
Common Pitfalls:
Describing a worker or customer in a process description and not describing the worker or customer in the structure of the current organization (CAB 3.2.1).
Describing an artifact in a process description and not describing it in the artifacts of the current organization (CAB 3.2.2).
Including processes that are not included in the Prioritized System Capabilities (CAB 4), and are not listed as evaluation criteria or COTS test case in (CAR 4).
Including design details about the process. For example, specifying exactly how user validation will be performed.
3.2.4 Shortcomings
Describe limitations of the current organization environment and current system, if it exists. Focus on how the organization environment and current system need to be improved, or replaced by a COTS-based solution, which is to be discovered through the COTS assessment effort.
Recommendations:
1. Clearly and concisely describe each shortcoming.
2. To facilitate traceability, assign each shortcoming a unique number (e.g. Shortcoming-01).
4. Prioritized System Capabilities
Capabilities define broad categories of system behaviors, and should realize the high-level system objectives described in COTS Assessment Boundary and Environment (CAB 2.3).
Since the proposed system is greatly COTS-dependent, it is better not to go into too much detail or be too specific when describing system capabilities, entities, or activities. Bear in mind to keep the system capabilities flexible and COTS-driven when thinking and documenting.
For the proposed system, project goals, constraints, capabilities, and levels of service are first identified during the win-win session between the customer and the evaluators. Some of the system capabilities produced through win-win negotiation may prove wrong, biased, or unachievable as the project progresses, given that the great uncertainties within the COTS candidates will be detected gradually. In this case, introducing more win-win negotiation sessions among key stakeholders to refine your system capabilities is a recommended approach.
Describe a few capabilities and work with domain experts and operational stakeholders to clarify and refine them. As more capabilities are documented, architects get a better idea of how those people view the proposed system (i.e., the conceptual system from their perspective).
Minimum information for each system capability is as indicated in the following suggested template:
Table B.42 Prioritized System Capability Specification
Identifier: Unique identifier (e.g. PSC-x)
Name: Name of capability
Description: What the capability allows the operational stakeholders to do
Importance: Relative importance of the capability (e.g. 1..n or Primary | Secondary | Optional)
Flexibility: Is this capability must-have or nice-to-have?
Used In: Reference to organization processes (CAB 3.2.3) that use the capability.
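A minimal sketch, using invented identifiers, of how the Table B.42 fields might be recorded and checked for traceability back to the CAB 3.2.3 processes:

```python
# Hypothetical sketch of a Table B.42 capability record, with a simple
# traceability check that every "Used In" reference points at a process
# actually documented in CAB 3.2.3. All identifiers below are invented.
from dataclasses import dataclass, field

@dataclass
class Capability:
    identifier: str   # e.g. "PSC-1"
    name: str
    description: str
    importance: str   # "Primary" | "Secondary" | "Optional"
    flexibility: str  # "must-have" | "nice-to-have"
    used_in: list = field(default_factory=list)  # CAB 3.2.3 process IDs

documented_processes = {"Process-01", "Process-02"}

caps = [
    Capability("PSC-1", "Document sharing", "Share project documents",
               "Primary", "must-have", ["Process-01"]),
    Capability("PSC-2", "Discussion forum", "Threaded team discussions",
               "Optional", "nice-to-have", ["Process-03"]),  # untraced
]

# Report capabilities that reference undocumented processes.
untraced = [c.identifier for c in caps
            if any(p not in documented_processes for p in c.used_in)]
print(untraced)  # ['PSC-2']
```

Such a check directly addresses the common pitfall, noted in CAB 3.2.3, of including processes or capabilities that lack counterparts elsewhere in the document set.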
Common Pitfalls:
Including a system requirement
Including a system Levels of Service goal
Including a system behavior
Including a lot of capabilities for a small system (some of them are likely to be either system requirements or system behaviors)
5. Desired and Acceptable Levels of Service
Define the levels of service required in the system (i.e., "how" and "how well" the system performs a given capability).
Indicate how the Desired and Acceptable Levels of Service are relevant to the Objectives, Constraints, and Priorities.
Levels of Service correspond with Spiral Model objectives, or in some cases constraints, as when the level is a non-negotiable legal requirement.
It is recommended to specify both acceptable and desired quality levels, and to leave the goals flexible, in order to produce the best balance among Level of Service requirements (since some Level of Service requirements conflict with each other, e.g., performance and fault tolerance) across different COTS solutions. Levels of Service should be M.R.S. (Measurable, Relevant, Specific).
Measures should specify the unit of measurement and the conditions in which the measurement should be taken (e.g., normal operations vs. peak-load response time). Where appropriate, include both desired and acceptable levels. Again, don't get too hung up on measurability details.
Table B.43 Levels of Service Specification
Level of Service: Give a reference number and name, such as "LS-1: Response time"
Description: Describe the level of service, such as "1 second desired; 2 seconds acceptable"
Degree of Flexibility: Desired or acceptable
Measurable: Indicate how this goal can be measured with respect to the specific elements it addresses; include as appropriate baseline measurements, minimum values, maximum values, average or typical or expected values, etc., such as "time between hitting Enter and getting useful information on the screen"
Relevant: Describe which system capabilities (CAB 4) this level of service is relevant to, such as "larger delays in order processing (see capability 3 in CAB 4) cause user frustration"
Specific: Describe what in particular within the system capabilities (CAB 4) this level of service addresses, such as "credit card validation may cause significant delay when attempting to connect to the verification service"
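A minimal sketch of evaluating a measured COTS response time against the "LS-1: Response time" example above (1 second desired; 2 seconds acceptable). The measured value is invented:

```python
# Hypothetical sketch: classifying a measured response time against the
# desired and acceptable thresholds of LS-1.

DESIRED_S = 1.0     # desired response time, seconds
ACCEPTABLE_S = 2.0  # acceptable response time, seconds

def rate_response_time(measured_s: float) -> str:
    """Classify a measured response time against LS-1."""
    if measured_s <= DESIRED_S:
        return "desired"
    if measured_s <= ACCEPTABLE_S:
        return "acceptable"
    return "unacceptable"

# e.g. a peak-load measurement of a COTS candidate:
print(rate_response_time(1.6))  # acceptable
```

Per the M.R.S. guidance, the measurement conditions (e.g., normal operations vs. peak load) should be fixed before such comparisons are made across COTS candidates.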
Common Pitfalls:
Overburdening the system with Levels of Service that are not validated by the customer
Including superfluous Level of Service goals. Table B.44 shows typical stakeholder concerns for Levels of Service.
Levels not satisfying the M.R.S. criteria
Table B.44 Stakeholder Roles / Level of Service Concerns Relationship
Stakeholder Roles and Primary Responsibilities | Primary Concerns | Secondary Concerns
General Public (avoid adverse system side effects: safety, security, privacy) | Dependability | Evolvability & Portability
Operator (avoid current and future interface problems between the system and interoperating systems) | Interoperability, Evolvability & Portability | Dependability, Performance
User (execute cost-effective operational missions) | Dependability, Interoperability, Usability, Performance, Evolvability & Portability | Development Schedule
Maintainer (avoid low utility due to obsolescence; cost-effective product support after development) | Evolvability & Portability | Dependability
Developer (avoid non-verifiable, inflexible, non-reusable product; avoid delay of product delivery and cost overrun) | Evolvability & Portability, Development Cost & Schedule, Reusability | Dependability, Interoperability, Usability, Performance
Customer (avoid budget and schedule overrun; avoid low utilization of the system) | Development Cost & Schedule, Performance, Evolvability & Portability, Reusability | Dependability, Interoperability, Usability
COTS/NDI Assessment Process (CAP) Document
Preface
The CAP document is organized in the same form used in MBASE plans. It covers the minimum essential "why/whereas, what/when, who/where, how, and how much" aspects of the activity being planned. It can be tailored on a risk-driven basis to cover special COTS/NDI assessment concerns. However, it generally does not need detailed work breakdown structures, monitoring and control plans, configuration management plans, iteration plans, or quality management plans. The CAP is a living document, and should be updated as new risks, opportunities, or changes emerge.
1. Introduction
1.1 Purpose, Scope, and Assumptions (why, whereas)
The purpose of this document is to provide the minimum essential set of plans needed to perform a COTS/NDI assessment for the [name of system].
Its scope covers the COTS/NDI assessment aspects of the [name portions] portions of the system plus the additional complementary activities of [name activities].
The following assumptions must continue to be valid in order to implement the plans below within the resources specified:
[list assumptions]
1.2 References
Provide a complete set of citations to all sources used or referenced in the preparation of this document. The citations should be in sufficient detail that the information used in this document can be traced to its source. Sources typically include books, papers, meeting notes, tools, and tool results.
1.3 Change Summary
For each version of the CAP document, describe the main changes since the previous version and briefly explain why. The goal is to help a reviewer focus on the most critical parts of the document needing review. The following change sources must be included: activity changes; schedule changes; resource changes.
2. Milestones and Products (what; when)
This section describes the COTS/NDI assessment process for the [name system] system and the rationale behind that process. It should contain schedules and milestones that indicate when each function and product will be completed.
2.1 Overall Strategy
Summarize and rationalize the overall COTS/NDI assessment strategy, including, for each class of product being assessed:
  Specify the number and length of downselection iterations;
  Specify the list of COTS attributes to be used as evaluation criteria, ordered by the importance of each attribute; and
  Specify the types of assessment activities to be performed, for example:
    o Reference check
    o Document review
    o Supplier inquiry
    o Demo
    o Analysis
    o Execution test
    o Prototype application
Evaluation criteria and weights are established based on stakeholder-negotiated OC&Ps for the system. Stakeholders also agree on the business scenarios to be used for the assessment. For a checklist of the strategy elements mentioned above, such as COTS classes and COTS attributes and their definitions, please refer to Appendix A, "COCOTS Assessment component".
In addition, identify complementary activities and how they synchronize with the COTS/NDI assessment. Examples are market trend analyses and product line analyses.
2.2 Milestones and Schedules
Identify and sequence the COTS/NDI assessment planning, preparation, execution, and analysis activities, and show how they interact with external events involved in complementary assessment activities and overall milestone schedules. Prepare and keep up-to-date a Gantt chart and PERT chart of the activities and schedules, using Microsoft Project or the equivalent. Identify critical path dependencies (e.g., interoperability testing requires executable products, facilities, equipment, probably tailoring and glue code, test drivers, key personnel, etc.) as candidate risk items for Section 4.2.
An elaborated risk-driven Win Win Spiral model, shown in Figure B.29, highlights the critical activities of a COTS assessment process that should be scheduled.
Figure B.29 Elaborated Spiral Model for COTS Assessment Project
The following is an example of the milestone and schedule description for a spiral cycle of a COTS assessment project (note that blue lines show new and important tasks):

Inception (2) Phase (beginning of spiral cycle #1 for 577b)
At this iteration, the plans must be mapped to the Win Win spiral in more detail. In addition, more detailed reviews must be defined, including exit criteria, in the quality management plan.

Identifying Stakeholders and system OC&P
1/26/03 Checking new stakeholders of the project
1/27/03 Team Formation
2/1/03 Review of MTA Criteria
2/2/03 Review of COTS Evaluation Criteria
2/4/03 Client Feedback
2/5/03 Feedback From Professors
2/6/03 Defining Exit Criteria
2/6/03 Use Spiral Modeler for maintaining the Spirals
2/10 RLCA draft on the Web

Evaluating Alternatives with respect to OC&P and Elaborating Product (these two steps are done in parallel)
2/12 Address Possible Risks of the Project
2/16 MTA Template changes
2/16 Continuing COTS evaluation for HP requirements (Oracle, Stellent)

Verification and Validation of the product
2/20 RLCO (RLCA) ARB
2/23 Peer Review of MTA and Evaluated COTS
2/24 IV&V Review
2.3 Deliverables
Identify the content, completion criteria, and due dates of all deliverables. These could include interim and trial versions of the COTS/NDI Assessment Report (CAR), or reports summarizing market trend analyses or product line analyses. In some cases, customers may want reasonably documented versions of your evaluation software for continuing assessment purposes.
3 Responsibilities (who, where)
Identify key stakeholder roles and individuals, and summarize their responsibilities in planning, preparing, executing, analyzing, or reviewing the COTS/NDI assessment and related activities. Summarize the objectives, content, and stakeholder roles involved in key milestone reviews.
Table B.45 Stakeholders and responsibilities

Evaluator
  Responsibility: Creating, modifying, using, and reporting evaluation criteria and results to the client; researching COTS market sectors; consulting with COTS vendors or business domain experts; comparing the available features of each COTS package; and analyzing and providing a reprioritized list of features for the proposed system. In parallel with the evaluation process, providing information to support decision making in evaluation reviews. One team member should be assigned responsibility for this task, reporting to the team leader and other stakeholders.
  When/Where: Win-win negotiation, team meetings, and client meetings.

Client
  Responsibility: Facilitating the evaluator's creation and refinement of templates, and reviewing the templates; providing feedback about the templates; facilitating the research and the contract issues of COTS candidates; reviewing analysis results.
  When/Where: Client meetings and the ARB.

COTS Vendor
  Responsibility: Providing COTS documentation, the price model, and possible future release information, and helping to identify the difficulty of closing requirements gaps through tailoring and glue code development.
  When/Where: Any time as needed, and regularly; contact by email, phone, or meeting.

Domain Expert
  Responsibility: Helping to prioritize requirements according to COTS-provided features, and identifying requirements not covered by the COTS products.
  When/Where: Any time as needed, and regularly; contact by email, phone, or meeting.
For the assessment performers, identify their specific responsibilities for the types of COTS/NDI products assessed, the aspects assessed, the types of assessment, and the associated support activities.
4 Approach (how)
4.1 Assessment Framework
4.1.1 Instruments
Provide a description of the assessment instruments (manual forms, Web forms, spreadsheets, etc.) to be used in performing the assessments. For example, the following table shows the description of the instruments used in a COTS assessment project:
Table B.46 Examples of COTS assessment instruments

Evaluation Criteria and Weights: A spreadsheet file that is created from the system OC&Ps and maintained by the 577 team.
COTS Product Literature: Product manuals obtained through vendor website/email/mail contact.
COTS Demos: Available through vendor website/email/mail contact.
Evaluation Data Collection Form: A spreadsheet file that is created from the evaluation criteria, and used by the 577 team to fill in evaluation data for each COTS candidate.
4.1.2 Facilities
Identify the facilities, equipment, software, COTS licenses, and procedures involved in each type of assessment, particularly execution tests and prototype applications. For example, provide a description of the required facilities as in the table below:

Table B.47 Examples of facilities description

Hardware: Computer(s) with access to the internet.
Software: Describe any software required in order to set up the COTS assessment environment.
COTS license: Describe the license type/status of each COTS candidate, and address possible risks related to COTS license issues (e.g., schedule delay due to being unable to obtain a COTS license) in CAP Section 4.2.
Procedures: If an execution test is performed, prepare the test procedures with respect to the processes discussed in Section 3.2.3 of the CAB.
4.1.3 COTS assessment
Following the definition and description of the COTS assessment process element in Section 3.4.1, identify the steps/tasks of the COTS assessment performed in your project. For each task, identify the facilities, equipment, software, COTS licenses, and procedures involved, particularly for execution tests and prototype applications.
4.2 Complementary Activities
4.2.1 Market Trend Analyses
In a COTS/NDI assessment project, market trend analysis is usually very important in order to properly address market considerations during the assessment. The goal of market trend analysis is to investigate the market sector of your particular domain and find information such as [4]:
  Standards related to your domain
  Standards-based implementations and their attributes, including business attributes, that can help you solve your problem
  Vendors that are market leaders, and their market shares
  Technologies for which products are widely available in the marketplace, and the business benefit experienced by the users of a given technology
  Which standards-based implementations work together, and an assessment of how well they work
4.2.2 Product Line Analyses
If the organization that initiates the COTS assessment project is aiming at making appropriate COTS choices that are compatible with the product line architecture, product line analysis is also critical for a successful COTS assessment activity. The evaluator must pay particular attention and have insight into the following aspects [5]:
  Qualification of potentially appropriate COTS and NDI components that are fit for use in the system
  Adaptation of components through such means as wrappers, middleware, and other forms of software "glue"
  Assembly of the adapted components to account for the interactions between the adapted components, the architecture, and the middleware
  Updates to the COTS components when new versions are released; decisions about the fit with the current assets and the evolution strategy of the target system; and judgments about the vendor strategy, support, and long-term viability
4.3 Risk Management
Identify and prioritize the top few COTS/NDI assessment risks and plans for resolving them. Update the top-risk list at each progress reporting period. Appendix B provides a list of top risks and corresponding mitigations for COTS assessment projects.
5 Resources (how much)
Summarize the estimated amount of effort, funding, and calendar time that will be required to plan, prepare, execute, analyze, document, review, and refine the assessments. Use and compare at least two bases of estimate (e.g., activity-based costing and COCOTS), and identify key assumptions made in performing the estimates.
COTS/NDI Assessment Report (CAR) Document
Preface
The CAR document is reasonably self-contained, but relies on the CAB for detailed background on the project and organizational goals and environment. It also relies on the CAP for details on milestones, budgets, schedules, and risks. Its level of detail is risk-driven, particularly with respect to budgets, schedules, and customer needs. Intermediate versions of the CAR should be produced for major milestones in the assessment process.
1 Executive Summary
Summarize the high-level objectives and context of the COTS/NDI assessment, the major results, the conclusions, and the resulting recommendations. Keep the summary on a single page.
2 Purpose, Scope, and Assumptions
The purpose of this document is to:
  Summarize the COTS/NDI assessment process of the [name of system];
  Present the major COTS/NDI assessment results and conclusions;
  Make recommendations to the client based on the COTS/NDI assessment results; and
  Provide substantiation for the results.

The scope of this document covers the COTS/NDI assessment objectives, context, approach, results, conclusions, recommendations, and supporting data, plus [add any other significant topics covered]. Normally, other activities such as a market trend analysis will have separate reports; provide references to those here.

The following assumptions underlie the analysis, results, conclusions, and recommendations: [list assumptions]
3 Assessment Approach
3.1 System Objectives and Context
Briefly summarize the application system objectives, constraints, and priorities. Refer to CAB Sections 2, 3, and 4 for details.
3.2 Assessment Objectives and Approach
Briefly summarize the COTS/NDI assessment objectives and approach. Elaborate somewhat on the Executive Summary, but refer to CAP Sections 2.1 and 4 for details.
4 Assessment Results
This section can be organized in different ways, depending on the critical assessment issues.
  If the assessment involves several relatively independent and compatible types of COTS, organizing primarily by type of COTS works well.
  If some assessments involve multiple rounds of downselecting among many COTS/NDI candidates, organizing by round of assessment (level of detail) works well.
  If the assessment is dominated by several system-wide issues (performance, scalability, dependability, usability), organizing by issue works well.

Whatever the organization of this section, cover the following elements:
  Types of COTS/NDI products assessed
  Candidate products assessed for each type
  Assessment criteria, weights, and rating scales
  Types of assessment used for each criterion (reference check, document review, supplier inquiry, demo, analysis, execution test, prototype application)
  Assessment scenarios (as appropriate)
  Assessment ratings and summaries
4.x Assessment Results - Part X
4.x.1 COTS candidates

Table B.48 Example list of COTS products

COTS Product       Web Address        Description
eProject           www.eproject.com
iPlanet package    www.iplanet.com
Blackboard         learn.usc.edu
4.x.2 Evaluation criteria
An example set of evaluation criteria derived from the attribute list of the USC COCOTS Assessment component is shown in the following table; the last column presents the corresponding weight assigned based on discussion between the client and the team members:

Table B.49 Example of evaluation criteria

No.  Evaluation Criteria (COTS attribute)            Weight
1    Inter-component Compatibility                   90
2    Product performance                             150
3    Functionality                                   500
4    Documentation Understandability                 60
5    Flexibility                                     80
6    Maturity of product                             50
7    Vendor support                                  80
8    Security                                        150
9    Ease of use                                     150
10   Training                                        80
11   Ease of installation/upgrade                    60
12   Ease of maintenance                             60
13   Scalability                                     100
14   Vendor viability/stability                      60
15   Compatibility with USC IT infrastructure        100
16   Evolution ability                               80
17   Ease of integration with third-party software   90
You should also describe the quantification method for scoring each criterion and briefly explain why it was chosen.
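As an illustration of one common quantification method, the sketch below computes a candidate's overall score as a weight-normalized average of 0-10 criterion ratings. The criterion names and weights are taken from Table B.49; the ratings are invented for illustration, and the document does not prescribe this exact formula.

```python
# Hypothetical sketch: combining per-criterion 0-10 ratings into one score,
# using a subset of the Table B.49 criteria and weights. The ratings are
# invented; a real assessment would rate every criterion for each candidate.
weights = {"Inter-component Compatibility": 90,
           "Product performance": 150,
           "Functionality": 500,
           "Security": 150}

ratings = {"Inter-component Compatibility": 8,
           "Product performance": 9,
           "Functionality": 7,
           "Security": 6}

def weighted_score(ratings, weights):
    """Weighted average of the 0-10 ratings, normalized by total weight."""
    total_weight = sum(weights.values())
    return sum(ratings[c] * w for c, w in weights.items()) / total_weight

score = weighted_score(ratings, weights)  # stays on the same 0-10 scale
```

Normalizing by the total weight keeps candidate scores comparable even if the criteria lists differ slightly between assessment rounds.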
The following table shows a template for the COTS product feature checklist. It breaks down a single criterion, "Functionality", from the table above into more detail in order to better measure the functionality of a COTS product. It can also help in performing gap analysis.
Table B.50 Example of evaluation criteria breakdowns

0.05  User authentication
        0.025  User Login
        0.025  New User Registration
0.1   Create new group
        0.1    (no further breakdown)
0.05  Group Page
        0.05   (no further breakdown)
0.15  Chat room
        0.03   Easy to use?
        0.03   Speed
        0.03   Can the chat content be saved?
        0.03   Support voice chat?
        0.03   Can the chat content be printed out?
0.15  Message Board
        0.03   Have post/reply/delete function?
        0.03   Only authorized user can use delete function?
        0.02   How many messages at most?
        0.05   Support query by name, date, or theme?
        0.02   Can the message/board content be printed out?
0.1   Calendar
        0.025  Nice interface?
        0.025  View calendar by month, week, and day?
        0.025  Add/view/delete function?
        0.025  Can the calendar content be printed out?
0.2   Project Management
        0.03   Has scheduling function?
        0.03   Has change schedule function?
        0.02   Has task notification function? (new task)
        0.03   Has task progress report function?
        0.03   Has task reminder function? (when login)
        0.03   Has workflow control feature?
        0.03   Has support for printing out task reports?
0.2   File Management
        0.03   Has upload file function?
        0.03   Has download file function?
        0.03   Interface friendly?
        0.03   Has configuration management support?
        0.02   Has new uploaded file notification function?
        0.02   Only authorized user can use delete function?
        0.02   Maximum file size limitation?
        0.02   Maximum storage space limitation?
Total: 1.0
4.x.3 Test procedure (optional)
Summarize the test procedures performed on the COTS products, and their results, based on the business scenarios of the target system. If intensive COTS evaluation testing is performed, prepare an individual testing description document. Refer to "Appendix C: COTS Evaluation Test Description Document" for an example.
4.x.4 Evaluation Results Screen Matrix

Table B.51 Example of evaluation screen matrix

[For each candidate (eRoom, eProject, eStudio), the matrix records, per evaluation item, the scores given by five raters (Rate-1 through Rate-5), their average, and the resulting weighted score. The overall totals in this example are: eRoom 8.24, eProject 8.06, eStudio 7.21.]
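The matrix cells appear to follow a simple pattern: each entry averages the five rater scores and multiplies the average by the feature's weight fraction from Table B.50. A minimal sketch, using one visible row of the matrix and an assumed weight of 0.025 for that row:

```python
# Sketch of how each cell group in the screen matrix appears to be computed:
# five rater scores are averaged, and the average is multiplied by the
# feature's weight fraction. The scores below are taken from one row of the
# example matrix; the 0.025 weight for that row is an assumption.
def cell(rater_scores, weight):
    avg = sum(rater_scores) / len(rater_scores)
    return avg, round(avg * weight, 2)

avg, score = cell([10, 10, 10, 9, 9], 0.025)
# avg == 9.6 and score == 0.24, matching a "9.6  0.24" entry in the matrix
```

Summing the weighted scores down a candidate's column then yields its overall total (e.g., 8.24 for eRoom in this example).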
4.x.5 Business Case Analysis
Establish a business case analysis for each COTS candidate/solution, including the following cost items:
4.x.5.1 COTS Ownership Cost
4.x.5.2 Development Cost
4.x.5.3 Transition Cost
4.x.5.4 Operational Cost
4.x.5.5 Maintenance Cost
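The business case comparison can be sketched as a simple aggregation of the five cost items above. The candidate names and cost figures below are invented for illustration; a real analysis would also consider the timing of the costs and the benefits side of the case.

```python
# Hypothetical sketch: totaling the five cost items of Section 4.x.5 per
# COTS candidate. All names and figures are invented for illustration.
COST_ITEMS = ["ownership", "development", "transition",
              "operational", "maintenance"]

candidates = {
    "Candidate A": {"ownership": 20, "development": 35, "transition": 5,
                    "operational": 10, "maintenance": 15},
    "Candidate B": {"ownership": 5, "development": 60, "transition": 10,
                    "operational": 12, "maintenance": 20},
}

def total_cost(costs):
    """Sum the five business-case cost items for one candidate."""
    return sum(costs[item] for item in COST_ITEMS)

totals = {name: total_cost(c) for name, c in candidates.items()}
```

A low acquisition cost can be offset by high development or maintenance costs, which is why the template asks for all five items rather than the license price alone.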
5. Conclusions and Recommendations
In general, it is best to organize these into conclusion-recommendation pairs, for example:
Table B.52 Example of conclusion-recommendation pairs

ID 1
  Conclusion: ER Mapper has by far the best performance, but runs only on Windows, failing the acceptable-portability criterion. Mr SID is fully portable, and has acceptable performance.
  Recommendation: Use Mr SID for the oversize image viewer function.

ID 2
  Conclusion: DBMS assessment is still underway, and Mr SID's interoperability is still uncertain.
  Recommendation: Perform an interoperability assessment between Mr SID and the two DBMS finalists.
Appendix C COCOTS Risk Analyzer Delphi
1. INTRODUCTION
This document is designed to collect expert opinions on the COCOTS Risk Analyzer, a model designed to identify and assess COTS integration risk patterns by using the cost factors in the COCOTS model, in particular the Glue Code sub-model. According to our latest research, the existing COCOTS cost drivers are closely related to the patterns of project risks. A risk scenario can be characterized as a combination of extreme cost driver values indicating increased effort with a potential for more problems. For example, one risk identification rule could be:
IF ((COTS Integrator Experience with Product < nominal)
    AND (COTS Product Interface Complexity > nominal))
THEN there is a project risk.
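A minimal sketch of how such a rule could be evaluated mechanically is shown below. The numeric encoding of the rating levels and the example project ratings are illustrative assumptions, not part of the COCOTS definition.

```python
# Sketch of evaluating the risk identification rule shown above.
# The 5-level COCOTS rating scale is mapped onto integers so that
# "< nominal" and "> nominal" comparisons can be expressed directly;
# the example project ratings are invented for illustration.
LEVELS = {"Very Low": 1, "Low": 2, "Nominal": 3, "High": 4, "Very High": 5}

def rule_fires(ratings):
    """IF (ACIEP < Nominal) AND (APCPX > Nominal) THEN project risk."""
    return (LEVELS[ratings["ACIEP"]] < LEVELS["Nominal"]
            and LEVELS[ratings["APCPX"]] > LEVELS["Nominal"])

project = {"ACIEP": "Low", "APCPX": "High"}  # hypothetical project ratings
risk = rule_fires(project)                   # True: both conditions hold
```

A full rule set would evaluate one such predicate per identified driver pair and report every combination that fires as a project risk.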
We need your help because your extensive knowledge and experience can help us identify a set of such risk identification rules based on the COCOTS-defined cost drivers. This document is organized into three main sections: Introduction (this section), Delphi Questions, and General Comments. Your responses to this survey will significantly affect the influence of the cost drivers and the overall risk taxonomy in the COCOTS Risk Analyzer. For general information on COCOTS and the Glue Code model, please refer to Appendix A. For detailed explanations of the COCOTS glue code cost drivers, please refer to Appendix B.

1.1 Point of Contact
Ye Yang
USC Center for Software Engineering
Salvatori Hall Room 330
941 West 37th Place
Los Angeles, CA 90089-0781
Email: yey@usc.edu
Phone: (213) 740-6470
Fax: (213) 740-4927
1.2 Instructions
Step 1: If you are unfamiliar with the COCOTS model, please read Appendix A carefully to build up the necessary background. It might take a few minutes, depending on your experience in this field.
Step 2: Please complete the Participant Information section (Section 1.3).
Step 3: Please read over each question and, based on your experience, answer it. If you feel unable to make an informed estimate, enter "DK" (Don't Know).
Step 4: If you have any questions, or need additional information about the COCOTS Risk Analyzer, please feel free to contact Ye Yang at yey@usc.edu.

1.3 Participant Information
Your Name _______________________
Corporation name _______________________
Location (City, State) _______________________
Email address _______________________
Phone _______________________
Years of experience in Software Engineering __________
Years of experience in risk management __________
Years of experience in cost estimation __________
Years of experience in COTS __________
2. Delphi Questions
Fourteen cost drivers have been defined for the COTS integration cost estimation model in the COCOTS Glue Code sub-model. These drivers are assessed while considering the overall amount of COTS glue code developed for the system, the Glue Code Size. Therefore, a total of 15 cost factors are defined, as summarized in Table 1.
Table 1 – COCOTS Glue Code Submodel Cost Drivers

No.  Name            Definition
1    Glue Code Size  The total amount of COTS glue code developed for the system
2    AAREN           Application Architectural Engineering
3    ACIEP           COTS Integrator Experience with Product
4    ACIPC           COTS Integrator Personnel Capability
5    AXCIP           Integrator Experience with COTS Integration Processes
6    APCON           Integrator Personnel Continuity
7    ACPMT           COTS Product Maturity
8    ACSEW           COTS Supplier Product Extension Willingness
9    APCPX           COTS Product Interface Complexity
10   ACPPS           COTS Supplier Product Support
11   ACPTD           COTS Supplier Provided Training and Documentation
12   ACREL           Constraints on Application System/Subsystem Reliability
13   AACPX           Application Interface Complexity
14   ACPER           Constraints on COTS Technical Performance
15   ASPRT           Application System Portability

We are interested in identifying risk situations that are described as a combination of extreme cost driver values, and then quantifying these risks depending on the cost drivers involved and their ratings.
Delphi Part I: Transforming Glue Code Size
Among the 15 cost attributes, only the Glue Code Size is measured quantitatively, in KSLOC. The remaining 14 cost attributes are measured qualitatively using a 5-level rating scheme (from Very Low to Very High). Therefore, we need to discretize the continuous representation of SIZE into the same 5-level rating schema, for simplicity of modeling in our case, and for comparability when matching extreme ratings of SIZE against those of any of the remaining 14 cost attributes. In Table 2, please determine the appropriate breakpoints for glue code size by putting appropriate KSLOC numbers in the second row:
Rating           Very Low   Low   Nominal   High   Very High
Glue Code Size

Table 2 Glue Code Size Rating Rationale

Question
Besides the Glue Code Size driver, do you think there are other factors that are also critical for capturing and measuring the amount/complexity/difficulty of glue coding?
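The discretization the table asks for can be sketched as a simple threshold lookup. The KSLOC breakpoints below are placeholders only, not the Delphi consensus values this survey is meant to elicit.

```python
# Sketch of discretizing Glue Code Size (KSLOC) into the 5-level COCOTS
# rating scheme. The breakpoint values are invented placeholders; the
# actual values are what the Delphi participants are asked to supply.
BREAKPOINTS = [(1, "Very Low"), (5, "Low"), (20, "Nominal"), (50, "High")]

def size_rating(ksloc):
    """Map a KSLOC figure to a rating via the first matching upper bound."""
    for upper, rating in BREAKPOINTS:
        if ksloc <= upper:
            return rating
    return "Very High"  # anything above the last breakpoint
```

Once SIZE is expressed on the same 5-level scale, its extreme ratings can be combined with those of the other 14 drivers in the risk rules of Part II.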
Delphi Part II: Identifying Attributes Interaction Risk
As described previously, a risk situation can be described as a combination of extreme cost attribute values (e.g., Very High for complexity or Very Low for integrator experience). An example matrix of initially identified risk combinations between COCOTS cost drivers is illustrated in Table 3. Each of the 15 cost attributes was examined one by one against the remaining 14 attributes to detect any possibly risky combinations when the attributes are rated at their extreme ratings.
Question
In Table 4 on the following page, please identify the potential risk situations that you believe exist between cost driver pairs when they are rated at their extreme ratings, by placing an "x" in the corresponding table entry. (Please note the differences between the cost driver orders in Table 3 and Table 4.)
Table 3 Initial Risk Situations Identified

[Table 3 is a 15 x 15 matrix with the cost drivers SIZE, ACREL, APCPX, ACIPC, ACPMT, AAREN, AACPX, ACIEP, ACPPS, APCON, AXCIP, ACPTD, ASPRT, ACPER, and ACSEW along both axes; an "x" marks an initially identified risky combination of extreme ratings. The number of risk situations per driver is: SIZE 8, ACREL 7, APCPX 6, ACIPC 5, ACPMT 3, AAREN 3, ACIEP 3, APCON 1, and none for the remaining drivers, for a total of 36 risk situations.]
Table 4 - Risk Situations Based on COCOTS

[Table 4 is a blank response matrix with the same 15 cost drivers along both axes, in the order: ASPRT, ACPER, AACPX, ACREL, ACPTD, ACPPS, APCPX, ACSEW, ACPMT, APCON, AXCIP, ACIPC, ACIEP, AAREN, SIZE.]
SIZE (Glue Code Size)
AAREN (Application Architectural Engineering: How adequate/sophisticated were the techniques used to define and validate the overall system architecture?)
ACIEP (COTS Integrator Experience with Product: How much experience does the developer have with running, integrating, and maintaining the COTS products?)
ACIPC (COTS Integrator Personnel Capability: What are the overall software development skills and abilities of the team as a whole on COTS integration tasks?)
AXCIP (Integrator Experience with COTS Integration Processes: Does a formal and validated COTS integration process exist within the organization, and how experienced is the developer in that process?)
APCON (Integrator Personnel Continuity: How stable is the integration team?)
ACPMT (COTS Product Maturity: How long have the versions been on the market or available for use? How large are the versions' market shares or installed user bases?)
ACSEW (COTS Supplier Product Extension Willingness: How willing are the COTS vendors to modify the design of their software to meet your specific needs?)
APCPX (COTS Product Interface Complexity: What is the nature of the interfaces between the COTS products and the glue code? Are there difficult synchronization issues?)
ACPPS (COTS Supplier Product Support: What is the nature of the technical support for the COTS components during development?)
ACPTD (COTS Supplier Provided Training and Documentation: How much training or documentation for the COTS components is available during development?)
ACREL (Constraints on Application System/Subsystem Reliability: How severe are the overall reliability constraints on the system?)
AACPX (Application Interface Complexity: What is the nature of the interfaces between the main application system and the glue code? Are there difficult synchronization issues?)
ACPER (Constraints on COTS Technical Performance: How severe are the technical performance constraints on the application?)
ASPRT (Application System Portability: What are the overall system or subsystem portability requirements that the COTS component must meet?)
3. General Comments
How can we make this survey better?
Any other comments?
Thank you for your participation.
References
Boehm, B. W., Abts, C., et al. (2000). Software Cost Estimation with COCOMO II. Prentice-Hall: Englewood Cliffs, NJ.
Madachy, R. J. (1997). Heuristic Risk Assessment Using Cost Factors. IEEE Software, 14(3), 51-59.
Appendix D USC E-Services Project COTS Survey
End of Semester Survey on COTS Issues
1. What is your individual role in your team?
2. Is your system largely dependent on COTS products? □ Yes □ No
If yes, please continue; if no, please go to question 8 on the second page.
3. Please list the name of the COTS product(s) and, for each COTS product listed, estimate what percentage of your system requirements is covered by that product. For example: MySQL (30%).
4. Circle the COTS-related activities that your project has performed:
a. Initial Filtering b. Detailed Assessment c. Tailoring d. Glue Code Development e. Market Analysis f. Vendor Contact g. Licensing Negotiation
If you are not clear about any of these activities, please list it below:
5. Has your team used the COTS Process Framework (as discussed in EC-22) in planning and performing your project tasks?
□ Yes □ No
If no, please indicate the reason(s):
If yes, have you found the process framework helpful? □ Yes □ No
If helpful, please check the items below where the framework was found useful:
□ In preparing the life cycle plan   □ In updating the life cycle plan
□ In doing COTS assessment   □ In doing COTS tailoring
□ In developing glue code and integrating with other system components
□ In identifying project risk items and a mitigation plan
□ In responding to project changes and surprises
(Extra credit) Please indicate any comments on how the framework might be improved:
6. Please check the following items that you believe apply to your project:
                                              No   Maybe   Slightly   Significantly
□ COTS delivers higher quality capabilities    ○     ○        ○            ○
□ COTS saves team development effort           ○     ○        ○            ○
□ COTS is hard to integrate                    ○     ○        ○            ○
□ COTS causes project schedule delay           ○     ○        ○            ○
□ COTS causes requirement changes              ○     ○        ○            ○
□ COTS causes architecture change              ○     ○        ○            ○
□ COTS does not cause any problem              ○     ○        ○            ○
□ COTS causes project surprises                ○     ○        ○            ○
7. Did your team use the MS Project template for COTS Assessment Projects that is available on the class website?
If not, why?
(Extra credit) Please indicate any comments on how the templates might be improved:
Please note that the questions below are only for teams not using a significant number of COTS packages.
8. Please check the reason why COTS was not used in your project:

□ No satisfactory COTS available (please check and specify below)
□ Satisfactory COTS available, but no budget available
□ Tried COTS, but preferred custom development
□ Never considered using COTS from the very beginning
□ Other reasons; please specify:
9. Has your team ever performed some COTS assessment activity even though you are currently doing a non-COTS development approach? □ Yes □ No
If yes, was the COTS unable to meet required system objectives? □ Yes □ No
If yes, please describe the conflicts between the system objectives and the candidate COTS(s):
Or, was the COTS unable to meet required system constraints? □ Yes □ No
If yes, please describe the conflicts between system constraints and the COTS(s):
Otherwise, please specify why the COTS does not fit in your project:
10. Compared to your current project progress, what would you expect if you had used a satisfactory COTS product in your project?
□ COTS could deliver higher quality capabilities than custom development
□ COTS could save team development effort
□ COTS could cause project schedule delay
□ COTS could cause requirement changes
□ COTS could cause architecture change
□ COTS would not meet system requirements