International Journal of Advances in Engineering & Scientific Research, Vol.3, Issue 3, Jul - 2016,
pp 65-84 ISSN: 2349 –3607 (Online) , ISSN: 2349 –4824 (Print)
Contact Us : [email protected] ; submit paper : [email protected] download full paper : www.arseam.com 65
www.arseam.com
ANALYSIS OF SOFTWARE COST ESTIMATION
BASED ON THREE LAYERS OF METRICS
Zahid Ahsan, M.Tech Scholar,
Al-Falah University, Dhauj,
Faridabad, Haryana
Saoud Sarwar, HoD, Department of Computer Science &
Engineering, Al-Falah University, Dhauj,
Faridabad
Abstract:
Software development effort estimation is the process of forecasting the effort needed to
estimate software costs for both development and maintenance. Such estimates may be used to
analyze project investment. Software researchers and practitioners have worked on effort
estimation for several decades. Most of them focused on the construction of formal software
effort estimation models, such as Putnam's SLIM, COCOMO 81, COCOMO II, COCOTS, Kemerer,
and Albrecht-Gaffney, which can be applied at different project scales (small, medium,
and large). We employed five software projects ranging from small (less than 10,000 SLOC) to
medium (10,000 – 100,000 SLOC) scale for estimating software effort by means of the
aforementioned estimation models. Conventional and object-oriented metrics were used to
determine the relationship among metrics, effort, and project size. All of the above models are
based on the Two Layer Architecture of Software Cost Estimation.
“The single most important task of a project: setting realistic expectations.”
The new enhanced Three Layer Architecture of Software Cost Estimation tunes the
parameters of the COnstructive COst MOdel (COCOMO) and the Function Point method, which
are the models most widely used in industry for estimating cost and schedule.
Key Words: Product Attributes, Hardware Attributes, Personnel Attributes, Project Attributes
Process Model of the Estimation
The assertion of this work concerns the estimation of the amount of effort required for the
development of a project. The software cost estimation process is a set of techniques and
procedures that is used to derive the software cost estimate, but it has many failure modes
and contributes to the software crisis.
Software Project Estimation
The four basic steps in software project estimation are:
1) Estimate the size of the development product. This generally ends up in either Lines of
Code (LOC) or Function Points (FP), but there are other possible units of measure. The
pros and cons of each are discussed in some of the material referenced at the end of this
report.
2) Estimate the schedule in calendar months.
The ability of a company to compete effectively in the increasingly competitive global market
is influenced to a large extent by cost [1]. Cost is the expenditure necessary for the attainment
of a goal.
Costing and pricing:
1. Estimates are made to discover the cost, to the developer, of producing a software
system.
2. There is not a simple relationship between the development cost and the price charged
to the customer.
Advantages: You get the contract
Disadvantages: The probability that the customer gets the system he or she wants is small.
Costs do not accurately reflect the work required.
How do you know what the customer wants?
1.4.1. Cost Models
Early cost models were linear, but it has been shown that there is no clear linear relationship
between effort and size.
Later cost models were generally based on the following non-linear formula:
E = (a + b × SIZE^c) × f(x1, . . ., xn)
where E = effort,
a, b and c are derived constants, and
x1 to xn are influencing factors which vary from project to project; the correction term f
depends on their values.
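As a sketch, the generic non-linear formula can be implemented directly. The constants and influence multipliers below are hypothetical, chosen only to show the shape of the calculation, not values from any calibrated model:

```python
# Generic non-linear cost model: E = (a + b * SIZE**c) * f(x1, ..., xn).
# The correction f is modelled here as a simple product of influence
# multipliers; all constants below are hypothetical.

def effort(size_kloc, a, b, c, multipliers):
    """Effort for a project of `size_kloc` KLOC."""
    base = a + b * size_kloc ** c      # base formula
    correction = 1.0
    for m in multipliers:              # f(x1, ..., xn)
        correction *= m
    return base * correction

# Example: hypothetical constants and two influence factors.
print(round(effort(10, 0.5, 2.4, 1.05, [1.1, 0.9]), 2))
```

With all multipliers equal to 1.0 the correction term drops out and only the base formula remains.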
Boehm's COCOMO
There are three forms of the COnstructive COst MOdel:
Basic COCOMO, which gives an initial rough estimate of man-months and development time;
Intermediate COCOMO, which gives a more detailed estimate for small to medium-sized
projects;
Detailed COCOMO, which gives a more detailed estimate for large projects.
Basic CoCoMo
Calculates a cost estimate based on project size using the following equations:
Table 1.4.2.1.1 – Basic COCOMO Data

Mode            Development effort        Maintenance effort
Organic         MMd = 2.4(KDSI)^1.05      MMam = 1.0(ACT)(MMd)
Semi-detached   MMd = 3.0(KDSI)^1.12      MMam = 1.0(ACT)(MMd)
Embedded        MMd = 3.6(KDSI)^1.20      MMam = 1.0(ACT)(MMd)

Where, MMd: the effort, in man-months, to develop the software.
MMam: the effort, in man-months, to maintain the software.
TDEV: development schedule in months.
KDSI: number of thousands of delivered source instructions.
ACT: annual change traffic, i.e. the average fraction of a product's source
instructions that are changed in one year.
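The table above translates directly into code. A minimal sketch of Basic COCOMO using the coefficients from Table 1.4.2.1.1 (the example project size and ACT value are hypothetical):

```python
# Basic COCOMO development and annual maintenance effort, using the
# coefficients from Table 1.4.2.1.1.

COEFFS = {  # mode: (a, b) in MMd = a * KDSI**b
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

def basic_cocomo(kdsi, mode="organic", act=0.0):
    """Return (MMd, MMam): development and annual maintenance man-months."""
    a, b = COEFFS[mode]
    mmd = a * kdsi ** b        # development effort
    mmam = 1.0 * act * mmd     # maintenance: MMam = 1.0 * ACT * MMd
    return mmd, mmam

mmd, mmam = basic_cocomo(32, mode="organic", act=0.15)
print(f"development: {mmd:.1f} MM, maintenance: {mmam:.1f} MM/yr")
```

The three modes correspond to Boehm's organic, semi-detached, and embedded project classes.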
Point of Estimation and Description
Derived methods are not suitable for atypical projects: they are calibrated to the past, not the
future, and they disregard components not included in the model.
In the Delphi or Shang technique, an expert may know what the right value is but be unable to
explain why it is so; thus certain problems occur here as well.
This method cannot be quantified, and it is hard to document the factors used by the expert or
expert group.
Proposed Solution
The three-point estimation technique is used to improve direct estimation when more values
are provided by estimators. It is one of the general estimating methods that helps project
managers produce better estimates.
Three Point Estimation Technique
• Optimistic (best-case scenario): The activity duration based on a best-case view of the
factors described in the most likely estimate.
• Most likely (approx. realistic scenario): The duration of the schedule activity, given the
resources likely to be assigned, their productivity, realistic expectations of availability for the
schedule activity, dependencies on other participants, and interruptions.
• Pessimistic (worst-case scenario): The activity duration is based on a worst-case
scenario of what is described in the most likely estimate, where everything goes wrong.
Given the Minimum, the Most Likely, and the Maximum Value for the size, the activity
duration estimate can be constructed, which is:
E = (Min + 4×MostLikely + Max) / 6
with standard deviation:
σ = (Max − Min) / 6
Where, E is a weighted average and σ measures the variability or uncertainty in the estimate.
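The two formulas above can be checked with a short sketch (the example duration values are hypothetical):

```python
# PERT-style three-point estimate: the weighted mean E and standard
# deviation sigma from the formulas above, given optimistic (Min),
# most likely, and pessimistic (Max) values.

def three_point(min_v, most_likely, max_v):
    e = (min_v + 4 * most_likely + max_v) / 6   # E = (Min + 4*MostLikely + Max)/6
    sd = (max_v - min_v) / 6                    # sigma = (Max - Min)/6
    return e, sd

e, sd = three_point(4, 6, 14)  # e.g. durations in weeks
print(e)   # -> 7.0
print(sd)  # -> about 1.67
```

Note how the pessimistic tail pulls E above the most likely value of 6.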
The three-point technique can be understood through PERT (Program Evaluation and
Review Technique) for duration values.
Critical Path Method: The longest possible continuous pathway taken from the initial event to
the terminal event. It determines the total calendar time required for the project; and, therefore,
any time delays along the critical path will delay the reaching of the terminal event by at least the
same amount.
The essential technique for using CPM is to construct a model of the project that includes the
following:
A list of all activities required to complete the project (typically categorized within a work
breakdown structure),
The time (duration) that each activity will take to complete, and
The dependencies between the activities.
Figure 2.4.2.1 – Critical Path Schedule.
Critical Activity: An activity that has total float equal to zero. An activity with zero float is
not necessarily on the critical path, since its path may not be the longest.
Lead Time: the time by which a predecessor event must be completed in order to allow
sufficient time for the activities that must elapse before a specific PERT event reaches
completion.
Lag Time: the earliest time by which a successor event can follow a specific PERT event.
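The forward-pass idea behind CPM can be sketched in a few lines; the task graph below is hypothetical, not from the paper:

```python
# Minimal critical-path sketch over a hypothetical task graph (durations in
# days). The forward pass computes each task's earliest finish; the largest
# earliest finish is the project duration, i.e. the length of the longest
# (critical) path.

durations = {"A": 3, "B": 2, "C": 4, "D": 2}
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

finish = {}  # memoised earliest-finish times

def early_finish(task):
    if task not in finish:
        start = max((early_finish(p) for p in predecessors[task]), default=0)
        finish[task] = start + durations[task]
    return finish[task]

project_duration = max(early_finish(t) for t in durations)
print(project_duration)  # longest path A -> C -> D: 3 + 4 + 2 = 9
```

A delay on any task along A → C → D delays the terminal event by at least the same amount, exactly as the definition above states.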
Cost estimation techniques
Algorithmic cost modelling: A model is developed using historical cost information that
relates some software metric (usually its size) to the project cost. An estimate is made of that
metric and the model predicts the effort required [16].
Expert judgement: Several experts on the proposed software development techniques and the
application domain are consulted.
Estimation by analogy: This technique is applicable when other projects in the same
application domain have been completed.
Parkinson’s Law: Parkinson’s Law states that work expands to fill the time available. The
cost is determined by available resources rather than by objective assessment.
Pricing to win: The software cost is estimated to be whatever the customer has available to
spend on the project. The estimated effort depends on the customer’s budget and not on the
software functionality.
The COCOMO model:
A number of algorithmic models have been proposed as the basis for estimating the effort,
schedule and costs of a software project. These are conceptually similar but use different
parameter values. The model that I discuss here is the COCOMO model. The COCOMO
model is an empirical model that was derived by collecting data from a large number of
software projects. The first level (basic) provided an initial rough estimate; the second level
modified this using a number of project and process multipliers; and the most detailed level
produced estimates for different phases of the project.
The sub-models [16] that are part of the COCOMO II model are:
1. An application-composition model: This assumes that systems are created from reusable
components, scripting or database programming. It is designed to make estimates of prototype
development.
2. An early design model: This model is used during early stages of the system design after the
requirements have been established. Estimates are based on function points, which are then
converted to number of lines of source code.
3. A reuse model: This model is used to compute the effort required to integrate reusable
components and/or program code that is automatically generated by design or program
translation tools. It is usually used in conjunction with the post-architecture model.
4. A post-architecture model: Once the system architecture has been designed, a more
accurate estimate of the software size can be made.
Software sizing
Software size is a key input to any estimating model and across most software parametric
models. Supported sizing metrics include source lines of code (SLOC), function points,
function-based sizing (FBS) and a range of other measures. They are translated for internal use
into effective size (Se). Se is a form of common currency within the model and enables new,
reused, and even commercial off-the-shelf code to be mixed for an integrated analysis of the
software development process. The generic calculation for Se is:
Se = NewSize + ExistingSize × (0.4 × Redesign + 0.25 × Reimpl + 0.35 × Retest) ------- (6)
As indicated, Se increases in direct proportion to the amount of new software being developed.
Se increases by a lesser amount as preexisting code is reused in a project. The extent of this
increase is governed by the amount of rework (redesign, re-implementation, and retest)
required to reuse the code.
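Equation (6) can be sketched directly; the SLOC counts and rework fractions in the example are hypothetical:

```python
# Effective size Se from equation (6): new code counts fully; reused code
# counts in proportion to its rework fractions (redesign, re-implementation,
# retest).

def effective_size(new, existing, redesign, reimpl, retest):
    rework = 0.4 * redesign + 0.25 * reimpl + 0.35 * retest
    return new + existing * rework

# 5,000 new SLOC plus 20,000 reused SLOC needing 20% redesign,
# 10% re-implementation and 50% retest:
print(effective_size(5_000, 20_000, 0.20, 0.10, 0.50))
```

With all rework fractions at zero the reused code contributes nothing, and with all at one it counts fully, matching the proportionality described above.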
Function based sizing: While SLOC is an accepted way of measuring the absolute size of
code from the developer's perspective, metrics such as function points capture software size
functionally from the user's perspective. The function-based sizing (FBS) metric extends
function points so that hidden parts of software such as complex algorithms can be sized more
readily. FBS is translated directly into unadjusted function points (UFP). In SEER-SEM, all size
metrics are translated to Se, including those entered using FBS.
Se = Lx × (AdjFactor × UFP)^(Entropy/1.2) ------- (7)
Where,
Lx is a language-dependent expansion factor.
AdjFactor is the outcome of calculations involving other factors mentioned above.
Entropy ranges from 1.04 to 1.2 depending on the type of software being developed.
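A minimal sketch of this size translation follows; the expansion factor, adjustment factor, and function point count below are hypothetical, not SEER-SEM calibration data:

```python
# Size translation from the equation above: unadjusted function points
# scaled by a language expansion factor, an adjustment factor, and the
# entropy exponent. All input values here are hypothetical.

def se_from_ufp(ufp, lx, adj_factor, entropy):
    return lx * (adj_factor * ufp) ** (entropy / 1.2)

# With entropy = 1.2 the exponent is exactly 1, so Se = 55 * 100 = 5500.
print(round(se_from_ufp(100, 55.0, 1.0, 1.2)))
```

Lowering entropy below 1.2 damps the growth of Se with function point count, which is the behaviour described for smaller IT-oriented projects.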
Effort and duration calculation
A project's effort and duration are interrelated, as is reflected in their calculation within the
model. Effort drives duration, notwithstanding productivity-related feedback between duration
constraints and effort. The basic effort equation is:
K = D^0.4 × (Se / Cte)^E -------------------------------------- (8)
Where,
Se is the effective size, introduced earlier.
Cte is the effective technology, a composite metric that captures factors relating to the
efficiency or productivity with which development can be carried out. An extensive set of
people, process, and product parameters feed into the effective technology rating. A higher
rating means that development will be more productive.
D is the staffing complexity, a rating of the project's inherent difficulty in terms of the rate
at which staff are added to a project.
E is the entropy. In days gone by entropy was fixed at 1.2. It then evolved to a range of 1.04
to 1.2 depending on project attributes, with smaller IT-oriented projects tending toward the lower end.
Currently entropy is observed as 1.0 to 1.2 depending on project attributes. SEER will allow an
entropy less than 1.0 if such a circumstance is observed as well.
Once effort is obtained, duration is solved using the following equation:
td = D^−0.2 × (Se / Cte)^0.4 --------- (9)
The duration equation is derived from key formulaic relationships. Its 0.4 exponent indicates
that as a project's size increases, duration also increases, though less than proportionally.
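Equations (8) and (9) can be sketched together. The parameter values below are hypothetical placeholders, not calibrated ratings, and the output units depend on the calibration, so only the functional form is shown:

```python
# Effort (8) and duration (9) as given above:
#   K  = D**0.4  * (Se/Cte)**E
#   td = D**-0.2 * (Se/Cte)**0.4
# All parameter values below are hypothetical.

def seer_effort(se, cte, d, entropy):
    return d ** 0.4 * (se / cte) ** entropy

def seer_duration(se, cte, d):
    return d ** -0.2 * (se / cte) ** 0.4

se, cte, d, e = 10_000, 2_500, 15.0, 1.2
print(f"effort K    = {seer_effort(se, cte, d, e):.1f}")
print(f"duration td = {seer_duration(se, cte, d):.2f}")
```

Doubling Se raises effort by 2^E but duration only by 2^0.4, which is the less-than-proportional growth noted above.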
“Full Function Points (FFP)” method was developed in 1997 [14]. It was a research project by
the University of Quebec in cooperation with the Software Engineering Laboratory in Applied
Metrics (SELAM). FFP method aimed to cover the area of real-time and embedded systems in
addition to data-strong systems by adding new data and transaction function types for
measuring the real-time components. COSMIC FFP method, which is the second version of
FFP, was published by Common Software Measurement International Consortium (COSMIC)
in November 1999 [27]. This group has been established to develop this new method as a
standardized one which would measure the functional size of software for business information
systems, real-time systems and hybrids of both. COSMIC FFP v.2.2 has been approved as being
conformant to ISO/IEC 14143 and become an international ISO standard in 2003 [29]. Finnish
Software Measurement Association (FiSMA) FSM method was developed by a working group of
FiSMA [38]. It is designed to be applicable to all types of software. Similar to other methods
based on “functionality”, FiSMA FSM is also based on functional user needs. The difference is
that, FiSMA FSM is service-oriented instead of process-oriented. The importance of being able
to estimate size of software earlier in the development life cycle has long been realized. In this
context, Early Function Point Analysis (EFPA) technique was developed and subsequently
refined [29], [40]. In 2000, Early & Quick COSMIC-FFP [41] was designed by the same
research group to estimate the functional size of a wide range of software. In 2004, release 2.0
of these two early methods was published [32]. Various approaches have been developed to
adapt FPA to the needs of object-oriented (OO) software development. A widely referenced
method is Object Points (OP) Method developed at the Leonard N. Stern School of Business,
New York University [33]. The concepts underlying this method are very similar to that of
FPA, except that objects, instead of functions, are being counted [24]. Other methods can be
listed as Object Oriented Function Points Method [35], Predictive Object Points (POPs)
method [36] and OO-Method Function Points (OOmFP) [37]. In 1996, the International
Organization for Standardization (ISO) started a working group (ISO/IEC JTC1 SC7 WG12) on FSM to
establish common principles of the methods based on “functionality”. They first published the
first part of this standard [25], which defines the fundamental concepts of FSM such as
“Functional Size”, “Base Functional Components (BFC)”,“BFC Types”, the FSM method
characteristics and requirements that should be met by a candidate method to be conformant to
this standard. The standard promoted the consistent interpretation of FSM principles. After
that, IEEE Std. 14143.1, which is an adoption of ISO/IEC 14143-1:1998, was published [28].
Four more parts of ISO/IEC 14143, which are ISO/IEC 14143-2 [26]; ISO/IEC TR 14143-3
[27]; ISO/IEC TR 14143-4 [28]; ISO/IEC TR 14143-5 [39], were published in the following
years (see Table 1). Currently, four methods have been certified by ISO to become an
international standard: Mk II FPA v.1.3.1 [2], IFPUG FPA v.4.1 [18], COSMIC FFP v.2.2
[29], and NESMA FP v.2.2 [30].
Feature points
Capers Jones published a method, based closely on that of Albrecht, called feature points, with
the aim of extending FSM to scientific algorithms. This method has been largely abandoned due
to the intrinsic difficulty of sizing mathematical algorithms. It is an adaptation of FPA
introduced by Software Productivity Research in order to measure software size beyond
business information systems.
[Figure: evolution of functional size measurement methods — FPA; 3D FPA; Feature Points;
Mk II FPA 1.3; IFPUG 3.0, 4.0, 4.1, 4.2 (and IFPUG unadjusted); NESMA 2.1; FFP v1.0;
COSMIC FFP 2.2; COSMIC 3.0.]
Fuzzy logic
The concept of fuzzy logic was introduced by Lotfi A. Zadeh in 1965. Fuzzy logic is used to
handle vague and imprecise queries. It is a multi-valued logic that uses values in the
interval [0, 1]. According to Zadeh, a fuzzy set is defined as: "In a universe of discourse Ux,
a fuzzy subset A of Ux is characterised by a membership function fA(x), where fA: Ux → [0, 1]".
The fuzzy membership function associates with each member x of Ux a number fA(x) in the
interval [0, 1], representing the degree of membership of x in A. Linguistic variables are
words whose values are imprecise, e.g., very low, low, average, high, very high. To represent
linguistic variables we use fuzzy numbers. Notions like "rather tall" or "very fast" can be
formulated mathematically and processed by computers, in order to apply a more human-like
way of thinking in the programming of computers [32]. Fuzzy systems are an alternative to
traditional notions of set membership and logic that has its origins in ancient Greek
philosophy [32]. The precision of mathematics owes its success in large part to the efforts of
Aristotle and the philosophers who preceded him. In their efforts to devise a concise theory
of logic, and later mathematics, the so-called "Laws of Thought" were posited [32]. One of
these, the "Law of the Excluded Middle," states that every proposition must either be true
or false.
6.3. Triangular membership function (TFN)
Let A1 = (c1, a1, b1) and A2 = (c2, a2, b2) be two triangular fuzzy numbers as Fig. 1. The addition
of A1 and A2 at h-level is:
A1 ⊕ A2 = [ LA1⁻¹(h) + LA2⁻¹(h), RA1⁻¹(h) + RA2⁻¹(h) ]
LA1 and RA1 are the functions L and R of fuzzy number A1, respectively; LA1⁻¹(h) and RA1⁻¹(h)
are the inverse functions of LA1 and RA1 at h-level, respectively. Likewise, LA2 and RA2 are
the functions L and R of fuzzy number A2, and LA2⁻¹(h) and RA2⁻¹(h) are their inverse
functions at h-level [32].
Suppose the membership function of A1 = (c1, a1, b1) is
fA1(x) = (x − c1)/(a1 − c1), c1 ≤ x ≤ a1,
         (x − b1)/(a1 − b1), a1 ≤ x ≤ b1,
         0, otherwise. -------------------------------------------(14)
Since
LA1(x) = (x − c1)/(a1 − c1), c1 ≤ x ≤ a1,
RA1(x) = (x − b1)/(a1 − b1), a1 ≤ x ≤ b1,
and
LA1⁻¹(h) = c1 + (a1 − c1)h, 0 ≤ h ≤ 1,
RA1⁻¹(h) = b1 + (a1 − b1)h, 0 ≤ h ≤ 1,
Similarly, suppose the membership function of A2 = (c2, a2, b2) is
fA2(x) = (x − c2)/(a2 − c2), c2 ≤ x ≤ a2,
         (x − b2)/(a2 − b2), a2 ≤ x ≤ b2,
         0, otherwise.
Since
LA2(x) = (x − c2)/(a2 − c2), c2 ≤ x ≤ a2,
RA2(x) = (x − b2)/(a2 − b2), a2 ≤ x ≤ b2,
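The definitions above translate into a short sketch: the triangular membership function and the h-level addition via the inverse L/R functions. The example TFNs are hypothetical:

```python
# Triangular fuzzy number (TFN) helpers, following the equations above.

def membership(x, c, a, b):
    """f_A(x) for the TFN A = (c, a, b) with peak at a."""
    if c <= x <= a:
        return (x - c) / (a - c)   # ascending side, L
    if a <= x <= b:
        return (x - b) / (a - b)   # descending side, R
    return 0.0

def add_at_h(a1, a2, h):
    """h-level interval of A1 (+) A2 using L^-1(h) and R^-1(h)."""
    c1, m1, b1 = a1
    c2, m2, b2 = a2
    left = (c1 + (m1 - c1) * h) + (c2 + (m2 - c2) * h)    # sum of L inverses
    right = (b1 + (m1 - b1) * h) + (b2 + (m2 - b2) * h)   # sum of R inverses
    return left, right

print(membership(0.3, 0.0, 0.25, 0.5))      # descending side of the triangle
print(add_at_h((0, 1, 2), (1, 2, 3), 1.0))  # at h = 1 both collapse to peaks
```

At h = 1 the interval degenerates to the sum of the two peaks, as expected for TFN addition.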
In our project there is a fuzzy assessment of the values assigned to the function points by the
decision makers. There are many stakeholders involved in the process of software project
development who act as decision makers [6]. We rate the linguistic variables, i.e., very low,
low, average, high and very high, on a scale of 0 to 1 and derive a weight using the formula of
graded mean integration given in equation 1. Suppose three stakeholders are involved who act
as decision makers, i.e., programmer (DM1), developer (DM2) and user (DM3). Each of them
rates the value of a function point as very low, low, average, high or very high.
Here, the average of each fuzzy linguistic variable is calculated and the formula of graded
mean integration is applied to these values as shown below.
AVVL = 0.246
AVL = 0.337
AA = 0.416
W = (c + 4a + b) / 6
W = 0.335
Where,
AVVL is the average value of linguistic variable Very Low,
AVL is the average value of linguistic variable Low,
AA is the average value of linguistic variable Average, and
W is the weight of the given linguistic variables.
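The graded mean integration step can be checked numerically with the averaged values above:

```python
# Graded mean integration weight W = (c + 4a + b) / 6, applied to the
# averaged linguistic values (AVVL, AVL, AA) for External Input.

def graded_mean(c, a, b):
    return (c + 4 * a + b) / 6

w = graded_mean(0.246, 0.337, 0.416)
print(round(w, 3))  # -> 0.335, matching the weight computed above
```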
On applying the above formulas to remaining values, we obtained the following results:
Table 6.4.2 – Calculated weights for each FPA attribute

FPA Attribute             DM1   DM2   DM3   Weight
External Input            VL    L     A     0.335
External Output           L     A     L     0.398
External Query            A     H     A     0.566
Internal Logical Files    H     VH    H     0.740
External Interface Files  H     A     A     0.849

Calculation of Fuzzy General System Characteristics
The fourteen general system characteristics are also rated as very low, low, average, high and
very high by the decision makers. To calculate the fuzzy weight of each GSC we use the
triangular membership function and graded mean integration as discussed earlier. The three
decision makers DM1, DM2, DM3 rate Data Communication as VL, L and VL; to calculate the
weight we use the TFN as follows:
GSC1 – Data Communication:
VL (0.00, 0.191, 0.255)
L  (0.255, 0.321, 0.495)
VL (0.00, 0.191, 0.255)
Here the average of each TFN component is calculated, and the weight is derived from the
average values:
c = 0.085, a = 0.234, b = 0.335
W = (c + 4a + b) / 6
W = 0.226
GSC2 – Distributed Data Processing:
L (0.255, 0.321, 0.495)
A (0.495, 0.501, 0.599)
L (0.255, 0.321, 0.495)
Here the average of each TFN component is calculated, and the weight is derived from the
average values:
c = 0.335, a = 0.381, b = 0.529
W = (c + 4a + b) / 6
W = 0.398
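The whole GSC weight calculation can be reproduced end-to-end: average the three decision makers' TFNs component-wise, then apply the graded mean integration. The TFN values per linguistic term are the ones used for GSC1 and GSC2 above:

```python
# GSC weight: component-wise average of the decision makers' TFNs, then
# W = (c + 4a + b) / 6. TFN values are those given for GSC1 and GSC2.

TFN = {
    "VL": (0.00, 0.191, 0.255),
    "L":  (0.255, 0.321, 0.495),
    "A":  (0.495, 0.501, 0.599),
}

def gsc_weight(ratings):
    tfns = [TFN[r] for r in ratings]
    c, a, b = (sum(t[i] for t in tfns) / len(tfns) for i in range(3))
    return (c + 4 * a + b) / 6

print(round(gsc_weight(["VL", "L", "VL"]), 3))  # Data Communication -> 0.226
print(round(gsc_weight(["L", "A", "L"]), 3))    # Distributed Data Proc. -> 0.398
```

This reproduces the in-text values 0.226 and 0.398 exactly.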
Table 6.5.1 - Fourteen General System Characteristics with their weights.
S.NO 14 GSC DM1 DM2 DM3 Weights
1 Data
Communication
VL L VL 0.226
2 Distributed Data
Processing
L A L 0.390
3 Performance H VH H 0.740
4 Heavily Used
Configured
L A A 0.457
5 Transition Rate H H A 0.616
6 Online Data Entry L VL L 0.282
7 End User
Efficiency
H A H 0.724
8 Online Update L L VL 0.282
9 Complex
processing
A A L 0.457
10 Reusability VH VH H 0.813
11 Installation Ease A A H 0.566
12 Multiple Site A H A 0.566
13 Facilitate Change VH H H 0.740
14 Operational Ease L VL VL 0.225
Complexity Adjustment Factor (CAF) = 0.65 + 0.01 × Σ(weights of the 14 GSCs) -------(2)
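Equation (2) applied to the fuzzy GSC weights of Table 6.5.1 gives the adjustment factor directly:

```python
# CAF from equation (2), using the 14 fuzzy GSC weights of Table 6.5.1
# in order.

gsc_weights = [0.226, 0.390, 0.740, 0.457, 0.616, 0.282, 0.724,
               0.282, 0.457, 0.813, 0.566, 0.566, 0.740, 0.225]

caf = 0.65 + 0.01 * sum(gsc_weights)  # 0.65 + 0.01 * 7.084
print(round(caf, 4))  # -> 0.7208
```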
CONCLUSION
The Two Layer Architecture of Software Cost Estimation is not feasible for calculating the
cost and effort of a software project, so my research is completely focused on Three Layers
Based Software Cost Estimation.
The COCOMO and Function Point models are based on the Two Layer Architecture of Software
Cost Estimation, which is not feasible for calculating the cost or scheduling project
development across the software development life cycle; the data collected from 15 top MNCs
during my research show a big margin between the actual cost and the cost calculated from the
COCOMO and Function Point models.
I have tried to develop a model for Three Layers Based Software Cost Estimation based on
Influence Factor Redesign Optimization (IFRO) to tune the parameters of the COnstructive
COst MOdel (COCOMO). It is integrated with the Influence Factor Function (IFF) methodology
to assist decision making in software design and development processes so as to arrive at the
exact cost of the project.
This Model will help the project managers to efficiently plan the overall software development
life cycle of the software with accurate cost and schedule estimation.
Data on 15 large completed business projects were collected and used to test the accuracy
of the models and post-effort estimation.
The inputs are fed into the program, the variables and influence factors are set based on the
project complexity, and the result is calculated; after adding the third layer, the estimate
approaches 93% accuracy with respect to the actual cost of the developed project.
Accurate estimates of effort, cost, and schedule have a major impact: without them, deliveries
fail, budgets are exceeded, projects run over schedule, and bottlenecks arise in the work
environment.
I added the third layer based on Three Layers Software Cost Estimation, and the difference
between the actual cost and the cost derived from Three Layers Based Software Cost
Estimation meets the accuracy target.
Software Effort and Cost estimation is the set of techniques and procedures that organize an
estimate for proposal bidding, project planning and probability estimates.
Accurate Effort and Cost estimation means better planning and efficient use of project
resources such as cost, duration and effort requirements for software projects.
The Three Layers Based Software Cost Estimation methodology is an enhanced approach that
focuses on the domain type of the software project. Assessing a project against the three
layers should be the first step for a project manager before estimating the cost or effort of
the project.
It is the responsibility of the project manager to have accurate estimates of the effort, cost,
and schedule involved in software development.
My approach meets the goal of any corporate organization: to achieve the best schedule- and
cost-finding process so that the project manager has accurate estimates of the effort, cost,
and schedule involved in software development.
REFERENCES
1. B.W. Boehm et al. "Cost Models for Future Software Life Cycle Processes: COCOMO
2.0." Annals of Software Engineering on Software Process and Product Measurement,
Amsterdam, 1995.
2. C. J. Burgess and M. Lefley. "Can Genetic Programming Improve Software Effort
Estimation?" Information and Software Technology, vol. 43, 2001, pp. 863-873.
3. Goether, Wolfhart B., Elizabeth K. Bailey, Mary B. Busby, Software Effort and Schedule
Measurement: A framework for counting Staff-hours and reporting Schedule
Information, CMU/SEI-92-TR-021, 1992
4. Humphrey, Watts, A Discipline for Software Engineering, Addison-Wesley, 1995
5. Dreger, Brian, Function Point Analysis, Prentice-Hall, 1989
6. Garmus, David & David Herron, Measuring the Software Process, Yourdon Press, 1996
7. Jones, Capers, Applied Software Measurement: Assuring Productivity and Quality,
McGraw-Hill, 1991
8. Jones, Capers, What are Function Points, 1997, http://www.spr.com/library/0funcmet.htm
9. International Function Point Users Group (IFPUG) web site: http://www.ifpug.org
A function point FAQ: http://ourworld.compuserve.com/homepages/softcomp/fpfaq.htm
10. C.F. Kemerer, “An empirical validation of software cost estimation models”,
Communications of the ACM, vol. 30, no. 5, May 1987, pp. 416-429.
11. S. S. Vicinanza, T. Mukhopadhyay, and M. J. Prietula, “Software-effort estimation: an
exploratory study of expert performance”, Information Systems Research, vol. 2, no. 4,
Dec.1991, pp. 243-262.
12. D. V. Ferens, and R. B. Gumer, “An evaluation of three function point models of
estimation of software effort”, IEEE National Aerospace and Electronics Conference, vol. 2,
1992, pp. 625- 642.
13. G. R. Finnie, G. E. Wittig, AI tools for software development effort estimation, Software
Engineering and Education and Practice Conference, IEEE Computer Society Press, pp.
346-353, 1996.
14. DeMarco, Tom, Controlling Software Projects, Prentice-Hall, 1982
15. F. J. Heemstra, “Software cost estimation”, Information and Software Technology, vol. 34,
no.
16. D. R. Jeffery, and G. C. Low, “Calibrating estimation tools for software development”,
Software Engineering Journal, July 1990, pp. 215-221.
17. J. Hihn and H. Habib-Agahi, “Cost estimation of software intensive projects: a survey of
current practices”, International Conference on Software Engineering, 1991, pp. 276-287.
18. D. R. Jeffery, G. C. Low, and M. Barnes, “A comparison of function point counting
techniques”, IEEE Trans on Soft. Eng., vol. 19, no. 5, 1993, pp. 529-532.
19. M. Shepperd and C. Schofield, "Estimating software project effort using analogy", IEEE Trans. on Soft. Eng., vol. 23, no. 12, 1997, pp. 736-743.
20. L. C. Briand, K. El Emam, and F. Bomarius, "COBRA: A hybrid method for software cost estimation, benchmarking, and risk assessment", International Conference on Software Engineering, 1998, pp. 390-399.
21. T. Mukhopadhyay, S. S. Vicinanza, and M. J. Prietula, "Examining the feasibility of a case based reasoning model for software effort estimation", MIS Quarterly, vol. 16, no. 2, June 1992, pp. 155-172.
22. K. Srinivasan and D. Fisher, “Machine learning approaches to estimating software
development effort”, IEEE Trans. Soft. Eng., vol. 21, no. 2, Feb. 1995, pp. 126-137.