APPENDIX A. Literature Review Resources
Various literature resources related to the object-oriented approach, maintainability,
usability, and software quality models were reviewed in a systematic manner, following
the guidelines of Kitchenham et al. (2004). Only sources that provided direct evidence
relevant to the research objectives described in Chapter 1 were included in the review.
The search strategy covered conferences, journals, standards, models, technical reports,
and books. The following table lists the major online resources consulted for the review.
Table A.1: Online Resources for Literature Review

Digital Libraries / Electronic Databases:
  IEEE Xplore (ieeexplore.ieee.org)
  SpringerLink (springerlink.com)
  CiteSeer (citeseer.ist.psu.edu)
  ACM Digital Library (portal.acm.org)
  Scopus (scopus.com)
  DBLP Computer Science Bibliography (dblp.uni-trier.de)
  Taylor & Francis (taylorandfrancis.com)
  Google Scholar (scholar.google.com)

Electronic Journals:
  IEEE Transactions on Software Engineering
  IEEE Software
  IEEE Computer
  Communications of the ACM
  ACM SIGSOFT Software Engineering Notes
  Software Quality Journal (Springer)
  Empirical Software Engineering (Springer)
  Journal of Systems and Software (Elsevier)
APPENDIX B. Comparison of Object Oriented Metrics
The table below summarizes the object-oriented metrics proposed in the literature; together
they cover coupling, cohesion, inheritance, size, and polymorphism.

References                        Metrics
Lyu (1992)                        CC, AMC
Li and Henry (1993)               DAC, DAC', MPC
Chidamber and Kemerer (1994)      WMC, CBO, RFC, LCOM, DIT, NOC
Lake and Cook (1994)              NOD, NOP, NMA
Lorenz and Kidd (1994)            CS, NOO, NOA, NMO, NMINH, SIX, NMI, NIM, NCM, NPAVG, NMA
Abreu (1995, 1996)                CF, AHF, MHF, MIF, AIF, PF
Bieman and Kang (1995)            LCC, TCC
Hitz and Montazeri (1995)         LCOM3, LCOM4, CO
Lee et al. (1995)                 ICP, IH-ICP, NIH-ICP, ICH
Li et al. (1995)                  NOM, MPC, CTA, CTM
Tegarden et al. (1995)            NOD, NOA
Henderson-Sellers (1996)          LCOM5, AID
Binkley and Schach (1998)         CDM, CHNL
Briand et al. (1998)              IFCAIC, ACAIC, OCAIC, FCAEC, DCAEC, OCAEC, IFCMIC, ACMIC, OCMIC, FCMEC, DCMEC, OCMEC, IFMMIC, AMMIC, OMMIC, FMMEC, DMMEC, OMMEC, NAI
Harrison et al. (1998)            NAS
Li (1998)                         NAC, NLM, NDC, CTA, CTM, NOO
Nesi and Querci (1998)            CACI, CI, CMIC, CACL, CL, CMICL
Bansiya (1999)                    DCC, MOA, ANA
Benlarbi and Melo (1999)          SPA, DPA, SPD, DPD, SP, DP, NIP, OVO
Tang et al. (1999)                CBM, IC, NOMA, AMCTKC
Cartwright and Shepperd (2000)    NAINH, NMImp, NMInh, NM, NAImp, Totattrib, NumPar, Stmts, NMpub, NMNpub, Attrib, States, EVNT, READS, DELS, RWD, LOC, LOC_B, LOC_H
Bieman et al. (2001)              NAINH, DAM, MO, FRIEND
Wilkie and Kitchenham (2001)      CBOback, CBOforward, CFF, CBB
Bansiya and Davis (2002)          MFH, MFA, MAA, MOA, MOS, HRM, DAH, OAM, MAM, NOC, NOA, NOM, CIS, CSB, CSM, CAM, DOI, ANA, NOP
Yu et al. (2002)                  CBOin, CBOout, RFCin, RFCout
Genero et al. (2003)              MaxHAgg, NAgg, NAggH, NGenH
Zimmermann et al. (2007)          ACD, NOI, NOCU
APPENDIX C. Comparison of Maintainability Evaluation Methods
For each study, the variables/metrics used, the methods used, and the dataset used are listed.

Source: Zhuo et al. (1993)
Variables/metrics used: AveLOC (average lines of code), ES (executable statements), LC (lines of comment), NES (number of executable statements)
Methods used: MAT (Maintainability Analysis Tool), regression, Halstead metrics, cyclomatic complexity, assessment model, entropy
Dataset used: HP-MAS (Hewlett-Packard Maintainability Assessment System) by the University of Idaho Software Engineering Lab; AFOTEC instrument

Source: Li et al. (1993)
Variables/metrics used: DIT (Depth of Inheritance Tree), NOC (Number of Children), CBO (Coupling Between Objects), RFC (Response For a Class), LCOM (Lack of Cohesion in Methods), WMC (Weighted Methods per Class)
Methods used: Linear regression analysis
Dataset used: Local data sets

Source: Henry et al. (1995)
Variables/metrics used: CBO (Coupling Between Object classes), LOC (Lines of Code)
Methods used: Linear regression analysis
Dataset used: Two commercial Ada systems (UIMS, QUES)

Source: Binkley et al. (1998)
Variables/metrics used: CDM (Coupling Dependency Metric), CBO (Coupling Between Object classes), NSSR (Number of Subsystem Relationships), RFC (Response set For a Class), WMC (Weighted Methods per Class), DIT (Depth of Inheritance Tree), CHNL (Class Hierarchy Nesting Level), NCIM (Number of Classes Inheriting a Method), WIH (Width of Inheritance Hierarchy), HIH (Height of Inheritance Hierarchy)
Methods used: Class coupling
Dataset used: C++ patient-care management system (113 classes, 82 KLOC); file transfer facility (29 Java classes, 6 KLOC)

Source: Muthanna et al. (2000)
Variables/metrics used: Impact rate, effort, error rate, subjective evaluation, goodness-of-fit statistical test, regression coefficient test, multidimensional assessment, Albrecht metrics, software complexity metric, Card and Agresti's complexity metric
Methods used: Regression analysis
Dataset used: MAT (Maintainability Analysis Tool) using FLECS (a structured FORTRAN preprocessor)

Source: Hayes et al. (2005)
Variables/metrics used: SLA (Service Level Agreement), KLOC (kilo lines of code), TRCA (Time of Resolution of Critical Anomalies), MR (number of Modification Requests), UC (Urgent Corrective)
Methods used: Logistic regression; MANTEMA, a maintenance methodology developed by Atos ODS
Dataset used: C++ systems, where the context differs substantially from COBOL (e.g., pointers are not available in COBOL)

Source: Dagpinar et al. (2003)
Variables/metrics used: KLOC (kilo lines of code), LCOM (Lack of Cohesion in Methods), LCC (Loose Class Cohesion), TNOS (Total Number of Statements), DIT (Depth of Inheritance Tree), ICAIC (Inheritance Class-Attribute Import Coupling), NICAIC (Non-Inheritance Class-Attribute Import Coupling), ICAEC (Inheritance Class-Attribute Export Coupling), NICMIC (Non-Inheritance Class-Method Import Coupling), NIMMIC (Non-Inheritance Method-Method Import Coupling), IIC (Inheritance Import Coupling), IEC (Inheritance Export Coupling)
Methods used: DTRIX parser used to assess maintainability aspects of object-oriented software
Dataset used: Java systems FUML and the dynamic object browser (DOBS)

Source: Genero et al. (2003)
Variables/metrics used: KA (Key Abstractions), VOPC (View Of Participating Classes), UML (Unified Modeling Language), PCA (Principal Component Analysis), NC (number of classes), NA (number of attributes), NAGG (number of aggregations), NDEP (number of dependencies)
Methods used: Linear regression
Dataset used: Replicated data identified from the data descriptions using the ANOVA method

Source: Hayes et al. (2004)
Variables/metrics used: MP (Maintainability Products), CF (Coupling Factor), CR (Comment Ratio), Hdiff (Halstead difficulty), LCOM (Lack of Cohesion in Methods), AC (Attribute Complexity), CC (Cyclomatic Complexity), TCR (True Comment Ratio), PM (Perceived Maintainability), LOC (Lines of Code)
Methods used: COCOMO (Constructive Cost Model), SLIM, AMEffMo (Adaptive Maintenance Effort Model), regression analysis
Dataset used: Validation dataset (the residuals increase as DLOC increases)

Source: Kiewkanya et al. (2004)
Variables/metrics used: Understandability, modifiability, UML (Unified Modeling Language) association, aggregation, generalization, classification
Methods used: Multilayer perceptron and decision trees (applied to construct the maintainability model)

Source: Hayes et al. (2005)
Variables/metrics used: CR (Comment Ratio), AC (Attribute Complexity), LOC (Lines of Code), TCR (True Comment Ratio), MI (Maintainability Index)
Methods used: Regression analysis, DC ratio
Dataset used: Spathic project data from the source code of a test-generation tool

Source: Bocco et al. (2005)
Variables/metrics used: Number of methods, number of associations
Methods used: Linear model
Dataset used: Local dataset

Source: Antonellis et al. (2007)
Variables/metrics used: MA (Multi-Criteria Analysis), WMC (Weighted Methods per Class), NPM (Number of Public Methods), CBO (Coupling Between Objects), NOP (Number of Polymorphic methods), DIT (Depth of Inheritance Tree), LCOM (Lack of Cohesion in Methods)
Methods used: Data extraction process, clustering
Dataset used: Maintainability methodology applied to analyze 1,440 classes of Apache Geronimo

Source: Antonellis et al. (2009)
Variables/metrics used: K-attractors, code-level metrics
Methods used: Data extraction process
Dataset used: Multimedia and GIS (Geographic Information Systems) applications

Source: Rizvi et al. (2010)
Variables/metrics used: Class diagram, LOC (Lines of Code), MI (Maintainability Index), NC (number of classes), NA (number of attributes), NM (number of methods)
Methods used: Regression
Dataset used: Data from ISO/IEC 9126 for reliability and testability

Source: Arisholm et al. (2010)
Variables/metrics used: LOC (Lines of Code), CP (Change Pattern), ROC (Receiver Operating Characteristic area)
Methods used: Linear
Dataset used: Local data sets

Source: Makker (2010)
Variables/metrics used: Class diagram, LOC (Lines of Code), DLOC (difference in lines of code), NC (number of classes), NA (number of attributes), NM (number of methods)
Methods used: Linear regression model
Dataset used: Data collected from multivariate maintainability and modifiability models

Source: Gautam et al. (2011)
Variables/metrics used: LOC (Lines of Code), DLOC (difference in lines of code), MI (Maintainability Index), CC (Cyclomatic Complexity)
Methods used: Multivariate linear model
Dataset used: F-test for multivariate analysis
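
Most of the studies listed above relate such metrics to a maintainability indicator through some form of regression analysis. As an illustration only (a minimal sketch with synthetic data and assumed variable names, not taken from any of the cited studies), the following Python fragment fits an ordinary least-squares model that predicts a maintenance-effort score from LOC, WMC, and CBO:

    # Synthetic illustration of the regression-style analyses cited above:
    # regress an observed maintenance-effort score on per-class code metrics.
    # All data values and variable names are assumptions for demonstration.
    import numpy as np

    # Hypothetical per-class measurements: columns are [LOC, WMC, CBO].
    X = np.array([[120, 8, 3],
                  [340, 21, 9],
                  [95, 5, 2],
                  [410, 30, 12],
                  [210, 14, 6]], dtype=float)
    # Observed maintenance-effort score for each class (synthetic).
    y = np.array([3.1, 9.8, 2.4, 13.0, 6.0])

    # Add an intercept column and solve ordinary least squares.
    A = np.hstack([np.ones((X.shape[0], 1)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    print("intercept and LOC/WMC/CBO coefficients:", np.round(coef, 4))

    # Predict the effort for a new class with LOC=200, WMC=12, CBO=5.
    print("prediction:", float(np.array([1, 200, 12, 5]) @ coef))

The same setup generalizes to the logistic and multivariate variants mentioned in the table; only the model fitted to the metric matrix changes.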
APPENDIX D. Usability Concepts
Each entry lists the researcher(s) and the usability concept they contributed.

Foley and Van Dam (1982): User interface guidelines.
Smith and Mosier (1984): Described usability as a product attribute.
Eason (1984): Interrelated usability and functionality.
Gould (1985): Defined usability in terms of learnability, usefulness, and ease of use.
Shackel (1986): Defined usability through the factors effectiveness, learnability, flexibility, and attitude.
Tyldesley (1988): Mentioned 22 factors that could be used to build metrics and specifications.
Doll and Torkzadeh (1988): End-User Computing Satisfaction Instrument (EUCSI).
Ravden and Johnson (1989): Presented software inspection as a usability evaluation mechanism.
Igbaria and Parasuraman (1989): Enjoyability is directly proportional to the acceptance of a system.
Booth (1989): Modified Shackel's criteria into usefulness, effectiveness, learnability, and attitude.
Polson and Lewis (1990): Gave problem-solving strategies for novice users interacting with complex interfaces.
Holcomb and Tharp (1990): Presented a software usability model to help system designers decide which usability sub-attributes to include.
Shackel (1991): Elaborated the usability concept.
Mayhew (1992): Reviewed usability principles that describe the desirable properties of an interface.
Grudin (1992): Practical acceptability of a system within categories such as cost, support, and system usefulness.
Nielsen (1993): Presented usability heuristics for the inspection method of usability evaluation; classified usability into learnability, efficiency, memorability, errors, and satisfaction.
Dumas and Redish (1993): Defined usability in terms of a focus on users, use of the product for productivity, busy users trying to accomplish tasks, and the users' own judgment of when a product is easy to use.
Dumas and Redish (1994): "people who use the product can do so quickly and easily to accomplish their own tasks".
Preece et al. (1993): Related usability to the overall performance of the system and user satisfaction.
Löwgren (1993): Explained usability as the outcome of relevance, efficiency, learnability, and attitude.
Hix and Hartson (1993): Related usability to interface efficiency and to the user's reaction to the interface.
Nielsen and Levy (1994): Worked on user satisfaction assessment of products.
Logan (1994): Divided usability into social and emotional dimensions.
Caplan (1994): Defined apparent usability as an important consideration in the design of a software system.
Bevan (1995): Replaced usability with "quality in use".
Lamb (1995): Claimed usability as a wider concept that includes content usability, organizational usability, and inter-organizational usability.
Guillemette (1995): Reviewed and defined usability with respect to the effective use of information systems.
Kurosu and Kashimura (1995): Divided usability into inherent usability and apparent usability.
Nielsen (1995): Presented "discount usability engineering".
Botman (1996): Presented "do it yourself usability evaluation".
Butler (1996): Dealt with usability engineering.
Harrison and Rainer (1996): Reviewed a model used for computing satisfaction (EUCSI).
Kanis and Hollnagel (1997): A high degree of usability can be established when the usability error rate is minimal.
Gluck (1997): Correlated usability with usefulness and usableness.
Tractinsky (1997): Contributed to explaining the concept of apparent usability.
Lecerof and Paterno (1998): Declared functionality to be essential to usability.
Thomas (1998): Categorized usability sub-attributes into three categories: outcome, process, and task.
ISO 9241-11 (1998): "Guidance on usability", which discusses usability for the purposes of system requirement specification and evaluation.
Clairmont (1999): "Usability is the degree to which a user can successfully learn and use a product to achieve a goal".
Head (1999): "Usability is rooted in cognitive science - the study of how people perceive and process information through learning, the use of memory, and attention".
Veldof et al. (1999): Related usability, users' reactions, and system development.
Vanderdonckt (1999): Design guidelines and principles to build an effective, user-friendly interface.
Kengeri et al. (1999): Explained usability using effectiveness, likeability, learnability, and usefulness.
Squires and Preece (1999): Regarded usability for its pedagogical value in e-learning systems.
Arms (2000): Aspects of usability: interface design, functional design, data and metadata, and computer systems and networking.
Alred et al. (2000): Related usability to technical/system and human factors.
Battleson et al. (2001): Explained interface design that is easy to learn, remember, and use, with few errors.
Hudson (2001): Described the concept of web usability.
Turner (2002): Illustrated a checklist for the evaluation of usability.
Blandford and Buchanan (2002): Explained usability in terms of technical, cognitive, and social design; also looked into future work on methods for analyzing usability.
Palmer (2002): Explained usability in the context of web usability.
Oulanov and Pajarillo (2002): Interface effectiveness as one of the most important aspects of interaction.
Matera et al. (2002): Gave "systematic usability evaluation".
Guenther (2003); Pack (2003): Illustrated the difficulties in defining usability.
Campbell and Aucoin (2003): Explained usability as a relationship between tools and their users.
Abran et al. (2003): Referred to usability as a set of multiple concepts: system performance, execution time of a specified task, user satisfaction, and ease of learning.
Quesenbery (2001, 2003): Presented "the five Es of usability": effectiveness, efficiency, engagement, error tolerance, and ease of learning.
Villers (2004); Dringus and Cohen (2005); Miller (2005): Expressed that usability evaluation methods should consider pedagogical factors.
Shneiderman and Plaisant (2005): Guidelines for error prevention; discussed system response time and data entry within HCI.
Krug (2006): Studied usability from the user's perspective based on their experience.
Dee and Allen (2006): End-user interface conformance to usability principles.
Seffah, Kline and Padda (2006): Gave 10 usability factors (efficiency, effectiveness, productivity, satisfaction, learnability, safety, trustfulness, accessibility, universality, and usefulness) associated with twenty-six usability measurement criteria.
Brophy and Craven (2007): Explained web usability.
Tullis and Albert (2008): Presented "Tips and Tricks for Measuring the User Experience".
Tullis (2009): Explained "Top Ten Myths about Usability".
Gardner-Bonneau (2010): Explained the effectiveness sustained by a software system when technical changes are made to it.
Bergstrom et al. (2011): Conducted iterative usability testing.
APPENDIX E. Usability Attributes in Different Standards and
Models
Each entry lists the source (standard or model) and the usability attributes it proposes.

McCall (1977): operability, training, communicativeness
Shackel (1981, 1986, 1991): ease of use, effectiveness, learnability, flexibility, user attitude
Butler (1985): task, predefined time
Makoid et al. (1985): user satisfaction, type of errors
Reed (1986): ease of learning, ease of use
Gould (1988): system performance, system functions, user interface
Booth (1989): usefulness, effectiveness, learnability, attitude
Bevan et al. (1991): product, user, ease of use, acceptability of product
Grady (1992): human factors, aesthetics, consistency in the user interface, online and context-sensitive help, wizards and agents, user documentation, training materials
IEEE Std. 1061 (1992): comprehensibility, ease of learning, communicativeness
Dumas et al. (1993): users, productivity, tasks, ease of use
Hix et al. (1993): initial performance, long-term performance, learnability, retainability, advanced feature usage, first impression, long-term user satisfaction
Löwgren (1993): result of relevance, efficiency, learnability, attitude
Nielsen (1993): learnability, efficiency, memorability, few errors, satisfaction
Porteous et al. (1993): efficiency, affect, helpfulness, control, learnability
Preece (1994): learnability, throughput, attitude, flexibility
Lewis (1995): system usefulness, information quality, interface quality
Gluck (1997): usableness, usefulness
Wixon (1997): learnability, efficiency, memorability, satisfaction, flexibility, first impressions, advanced feature usage, evolvability
Dix et al. (1998): learnability, flexibility, robustness
ISO 9241-11 (1998): efficiency, effectiveness, satisfaction
Lecerof et al. (1998): users' needs, efficiency, users' subjective feelings, learnability, system safety
Thomas (1998): outcome, process, task
Constantine (1999): learnability, efficiency in use, rememberability, reliability in use, user satisfaction
Kengeri et al. (1999): effectiveness, likeability, learnability, usefulness
Arms (2000): interface design, functional design, data and metadata, computer systems and networking
Battleson et al. (2001): easy to learn, rememberability, few errors, support
Donyaee et al. (2001): effectiveness, efficiency, satisfaction, productivity, safety, internationality, accessibility
ISO/IEC 9126-1 (2001): understandability, learnability, operability, attractiveness, usability compliance
Brinck et al. (2002): functionally correct, efficient to use, easy to learn, easy to remember, error tolerant, subjectively pleasing
Kim (2002): interface effectiveness
Oulanov (2002): affect, efficiency, control, helpfulness, adaptability
Bass et al. (2003): modifiability, scalability, reusability, performance, security
Campbell et al. (2003): easy to learn, easy to use, easy to remember, error tolerant, subjectively pleasing
Shneiderman and Plaisant (2005): time to learn, speed of performance, rate of errors by users, retention over time, subjective satisfaction
Sauro et al. (2009): task times, completion rates, errors, post-task satisfaction, post-test satisfaction
APPENDIX F. Comparison of Usability Measurement Methods
The usability evaluation methods compared below are grouped by evaluation type (testing, inspection, and inquiry). For each method, the applicable development stages, a brief description, and the main pros and cons are listed.

TESTING METHODS

Coaching Method
  Applicable stages: design, code, test, deployment
  Description: Collects information about the needs of the user.
  Pros: The coach can easily identify users' usage problems on the spot.
  Cons: The overall interaction between coach and users is limited, and fewer usability problems are found.

Performance Measurement
  Applicable stages: design, code, test, deployment
  Description: Collects information about the performance of an organization or an individual.
  Pros: Compares different interfaces and checks whether the user's aim has been met.
  Cons: Emphasizes first-time usage and covers only a limited number of interface features.

Question-Asking Protocol
  Applicable stages: design, code, test, deployment
  Description: The users' ability to answer questions is checked.
  Pros: It is simple, and it reveals which parts of the interface were obvious and which were not.
  Cons: The interpretation can be wrong.

Retrospective Testing
  Applicable stages: design, code, test, deployment
  Description: Provides a walkthrough of performance recorded previously on video.
  Pros: Suitable for participants for whom talking or writing while working may be difficult.
  Cons: It is time consuming.

Thinking-Aloud Protocol
  Applicable stages: design, code, test, deployment
  Description: Conducted with experimenters who videotape the subjects and perform detailed protocol analysis.
  Pros: It is inexpensive, and the results are close to the observations made by users.
  Cons: It is not a user-friendly protocol.

Co-Discovery Learning
  Applicable stages: design, code, test, deployment
  Description: Involves observing two users working on the same task.
  Pros: Users feel free to discuss with each other.
  Cons: Differences in learning and cultural style may affect the feedback.

Teaching Method
  Applicable stages: design, code, test, deployment
  Description: Used as an alternative to the thinking-aloud method.
  Pros: The larger number of verbalizations and the participants' interactive behavior reveal their thought processes and search strategies.
  Cons: It is time consuming, since briefing the participants is necessary.

Remote Testing
  Applicable stages: design, code, test, deployment
  Description: Testers can view the user's interaction in real time.
  Pros: The three major aspects of usability (effectiveness, efficiency, and satisfaction) are covered.
  Cons: Additional software is required to observe users from a distance.

INSPECTION METHODS

Cognitive Walkthrough
  Applicable stages: design, code, test, deployment
  Description: A group of experts examines the code in a certain pattern to find problems.
  Pros: Does not require a fully functional prototype.
  Cons: Addresses neither user satisfaction nor efficiency.

Heuristic Evaluation
  Applicable stages: design, code, test, deployment
  Description: Finds usability problems in the user interface.
  Pros: No formal training is required for evaluators.
  Cons: Biased by the preconceptions of the evaluators.

Feature Inspection
  Applicable stages: code, test, deployment
  Description: Lists the features used to accomplish tasks.
  Pros: Does not require a large number of evaluators.
  Cons: Takes a long time if applied to all features of the system.

Pluralistic Walkthrough
  Applicable stages: design
  Description: Identifies usability issues in a piece of software.
  Pros: More usability issues can be found at a time.
  Cons: The most important usability aspect, efficiency, is not addressed.

Card Sorting
  Applicable stages: user requirements, early design
  Description: A technique that involves users grouping the information for a web site.
  Pros: Simple, organized, cheap, and fast to execute.
  Cons: The results of card sorting may vary.

Tree Testing
  Applicable stages: user requirements, design
  Description: The reverse of card sorting.
  Pros: Allows navigation to be tested visually, which improves reliability.
  Cons: It is not moderated, so researchers cannot see the users or participants.

INQUIRY METHODS

Field Observation
  Applicable stages: test, deployment
  Description: Collects detailed information about how people work and the context in which the work takes place.
  Pros: Highly reliable and less expensive.
  Cons: Some tasks may not occur in the manner in which they are observed.

Interviews / Focus Groups
  Applicable stages: context and user requirements, testing
  Description: Elicits the views and understanding of the users about a selected topic.
  Pros: Useful ideas are produced, which also results in healthy customer relations.
  Cons: The collected data has low validity.

Proactive Field Study
  Applicable stages: requirements, design
  Description: Used in the early design stage to understand the needs of the users.
  Pros: Individual user characteristics, task analysis, and functional analysis are obtained.
  Cons: Cannot be conducted remotely, and the collected data is not quantitative.

Logging Actual Use
  Applicable stages: test, deployment
  Description: Automatically collects statistics about the detailed use of the system.
  Pros: Can show the statistics for each action.
  Cons: Shows what users did, not why they did it.

Surveys
  Applicable stages: test, deployment
  Description: Acquire information focused directly on problems and conclusions.
  Pros: Comparatively faster at determining the preferences of large user groups.
  Cons: Does not capture details and may not permit follow-up.

SUMI Questionnaire
  Applicable stages: user requirements, testing
  Description: A method of measuring quality from the user's point of view.
  Pros: Provides an objective way of assessing user satisfaction.
  Cons: Results produced by SUMI are valid only if the questionnaire has been administered in the same way to all users and the results are interpreted properly and carefully.

MUSiC Context Analysis
  Description: Aims at obtaining qualitative and quantitative data to support the systems.
  Pros: Used to find performance metrics of the user.
  Cons: Fails to capture an accurate and reviewable record.

DRUM Video Recording
  Description: Derives diagnostic information from an analysis of videotape.
  Pros: Helps the analyst create a time log of the user's actions.
  Cons: Needs to be licensed to organizations because of the risks involved.
APPENDIX G. Usability Questionnaire
SECTION 1.
Dear Sir/Madam,
This questionnaire has been prepared to measure the importance of the proposed usability
factors of software systems. Its aim is to gather project-specific data in order to
evaluate the usability of object-oriented systems. Within the context of this project,
usability is the software quality factor defined as the ease with which a user can learn
to operate, prepare inputs for, and interpret outputs of a system or component.
Usability Sub-Factors
Effectiveness: the extent to which a user completes a specified task or goal accurately
and completely.
Efficiency: a performance measure of the system, i.e., the time and resources expended
to complete a specified task or goal successfully.
Satisfaction: the user's acceptability of the system in the specified context of use.
Learnability: the capability of the software product to enable the user to learn its
application.
The above-mentioned characteristics can be measured using the following software
metrics:
1. Weighted Methods per Class (WMC)
It is the count of methods implemented within a class, or the sum of the complexities of
those methods.
2. Depth of Inheritance Tree (DIT)
It is the number of steps from the class node to the root of the inheritance tree and is
measured by the number of ancestor classes.
3. Coupling between Object Classes (CBO)
CBO is a count of the number of other classes to which a class is coupled. It is measured
by the number of distinct non-inheritance-related class hierarchies on which a class
depends.
4. Responses for a Class (RFC)
It is the count of the set of all methods that can be invoked in response to a message to
an object of the class, or by some method of the class.
5. Lack of Cohesion in Methods (LCOM)
Cohesion is the degree to which the methods in a class are related to one another. LCOM
measures the dissimilarity of methods in a class in terms of instance variables or
attributes.
6. Number of Children (NOC)
NOC is defined as the number of immediate child classes derived from a base class.
Note: The values of all the above metrics are inversely proportional to the usability of
the system.
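
Where metric tool support is unavailable, these counts can be approximated directly from code. The sketch below (not part of the thesis; the class names and the unit-weight WMC counting rule are illustrative assumptions) shows how DIT, NOC, and WMC could be computed for Python classes via introspection:

    # Minimal sketch of three CK metrics for Python classes via introspection.
    # Unit weights are assumed for WMC (each method counts as 1).
    import inspect

    def dit(cls) -> int:
        """Depth of Inheritance Tree: longest path from cls up to object."""
        if cls is object:
            return 0
        return 1 + max(dit(base) for base in cls.__bases__)

    def noc(cls) -> int:
        """Number of Children: immediate subclasses of cls."""
        return len(cls.__subclasses__())

    def wmc(cls) -> int:
        """Weighted Methods per Class with unit weights: methods defined in cls."""
        return sum(1 for m in vars(cls).values() if inspect.isfunction(m))

    # Illustrative (hypothetical) class hierarchy.
    class Shape:
        pass

    class Circle(Shape):
        def __init__(self, r):
            self.r = r
        def area(self):
            return 3.14159 * self.r ** 2

    print(dit(Circle), noc(Shape), wmc(Circle))  # -> 2 1 2

Production measurement tools refine these definitions (for instance, weighting WMC by each method's cyclomatic complexity), but the counting logic is the same.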
Guidelines to fill the Questionnaire
(i) The following scale (intensity of importance) is to be used when filling in the
matrices in the questionnaire:

Table G.1: Scale for Relative Importance

Intensity of Importance   Definition                  Explanation
1                         Equal importance            The two factors contribute equally to the objective.
3                         Somewhat more important     Experience and judgment slightly favor one factor over the other.
5                         Much more important         Experience and judgment strongly favor one factor over the other.
7                         Very much more important    Experience and judgment very strongly favor one factor over the other; its importance is demonstrated in practice.
9                         Absolutely more important   The evidence favoring one factor over the other is of the highest possible validity.
(ii) An example to demonstrate the procedure
Note: Each entry in the matrix denotes the importance of the factor given in the row
relative to the factor given in the corresponding column.
A matrix for pairwise comparisons of three factors (A, B and C) affecting the usability
of a system is given below. Assume the following judgments:
1. Factor A is 'somewhat more important' than factor B; hence the value is 3.
2. Factor C is 'much more important' than factor A, so factor A must be 'much less
important' than factor C; hence the value is 1/5.
3. Factor C is 'absolutely more important' than factor B, so factor B must be 'absolutely
less important' than factor C; hence the value is 1/9.
By referring to Table G.1, we get the following values:

USABILITY   A   B   C
A           1   3   1/5
B               1   1/9
C                   1
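
A completed matrix of this kind is normally analysed by filling the lower triangle with reciprocals and deriving a priority weight for each factor. The sketch below illustrates that step, assuming the geometric-mean (approximate eigenvector) weighting commonly used with such pairwise-comparison matrices; the questionnaire itself does not prescribe this particular computation:

    # Turn the example pairwise judgments into priority weights.
    # The geometric-mean row weighting is an assumed, standard choice.
    import math

    factors = ["A", "B", "C"]
    upper = {("A", "B"): 3.0, ("A", "C"): 1 / 5, ("B", "C"): 1 / 9}

    n = len(factors)
    M = [[1.0] * n for _ in range(n)]
    for i, fi in enumerate(factors):
        for j, fj in enumerate(factors):
            if (fi, fj) in upper:
                M[i][j] = upper[(fi, fj)]
                M[j][i] = 1.0 / upper[(fi, fj)]  # reciprocal entry

    # Geometric mean of each row, normalised so the weights sum to 1.
    gm = [math.prod(row) ** (1.0 / n) for row in M]
    weights = [g / sum(gm) for g in gm]
    print({f: round(w, 3) for f, w in zip(factors, weights)})
    # {'A': 0.178, 'B': 0.07, 'C': 0.751}: C dominates, as the judgments imply.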
NOTE: The data you provide as part of this survey will be used in the research and will
be kept confidential. The details of any participating respondent will not be directly
published anywhere without prior permission.
SECTION 2
QUESTIONNAIRE
Company’s Name (Optional):
Respondent’s Name (Optional):
Respondent’s Designation (Optional):
Domain/Specialization Area:
PAIRWISE COMPARISON OF ATTRIBUTES
Based on your perception, please fill in matrices 1 to 5 by rating each pair of attributes
from 1 to 9 (or the corresponding reciprocal) according to the above-mentioned guidelines
and Table G.1.
MATRICES:
1. PAIRWISE COMPARISON

                Effectiveness   Efficiency   Satisfaction   Learnability
Effectiveness   1
Efficiency                      1
Satisfaction                                 1
Learnability                                                1

2. EFFECTIVENESS

        WMC   RFC   LCOM   DIT   NOC   CBO
WMC     1
RFC           1
LCOM                1
DIT                        1
NOC                              1
CBO                                    1

3. EFFICIENCY

        WMC   RFC   LCOM   DIT   NOC   CBO
WMC     1
RFC           1
LCOM                1
DIT                        1
NOC                              1
CBO                                    1

4. SATISFACTION

        WMC   RFC   LCOM   DIT   NOC   CBO
WMC     1
RFC           1
LCOM                1
DIT                        1
NOC                              1
CBO                                    1

5. LEARNABILITY

        WMC   RFC   LCOM   DIT   NOC   CBO
WMC     1
RFC           1
LCOM                1
DIT                        1
NOC                              1
CBO                                    1
APPENDIX H. Snapshot of the Values of CK Metrics
H.1 CK Metrics for Project1
H.2 CK Metrics for Project2