Software Quality Metrics Do's and Don'ts - QAI-Quest 1 Hour Presentation
Quest 2014
"Software Quality Metrics Do’s and Don’ts"
Philip Lew
Meet Your Instructor
• Team Lead, data warehousing product development
• Software product manager, BI product
• COO, large IT services company
• CEO, XBOSoft, software QA and testing services
• Relevant specialties
  – Software quality process improvement
  – Software quality evaluation and measurement
  – Software quality in use / UX design
  – Mobile user experience and usability
Let’s Meet Each Other
• Name
• Your company/role
• Interesting tidbit or fact
• Objective in being here
© 2014 XBOSoft, Inc. - All Rights Reserved.
Some Information
• Used 1 hour efficiently and covered the material. It was interactive too.
• Presenter was comfortable and engaging. Content was good.
• Very interactive and demanding, commands great attention.
• Best content. Provided great brainstorming opportunities and gave examples. Great presenter.
More Information
• Slides too low where people's heads are. Can't see.
• How do we measure quality? You didn't answer that at all. Very disappointed.
• I realize interaction is good, but found the discussions to be distracting.
• It was too hard to hear what was said.
• Instructor only focused on people in front whose hands were up.
• Too large a group. • Slides were different.
Session Spirit and Expectations
• Interactive given time and other context
• I won't read the slides…
  – Slides for you as a take-away
  – Slightly different than your handouts
  – Some new examples and ideas
  – Write me and I'll send these to you
• I’ll repeat questions (if I can remember)
Motivation: Why Is QA Measurement Important?
• Quality assurance measurement provides information to answer questions about your testing, your team, and your company.
  – Is your QA team improving?
  – Have you reached the highest state of efficacy?
  – Is your testing as thorough as it can be?
  – Is software ready for release? Can we stop testing now?
  – Are you able to do everything you want within budget?
  – What parts of the software are most at risk?
• If the answer to any one of those questions is "no", or "I don't know", then you are in the right place.
Meal of Metrics
Schedule variance, effort variance, cost variance, defect removal effectiveness, productivity, defect aging, critical defect rate, test cases executed, test coverage, pass/fail rate, ROI, automation coverage, value provided, defects/hour, customer satisfaction, average time to fix, recurrence rate, code complexity, test density, defect find rate, defect arrival rate, requirements volatility, requirements ambiguity, planned/actual, overtime rate, completion rate, velocity, execution rate
Don’t – Calculate metrics that don’t help answer specific questions
• Are you collecting measurements and calculating metrics without asking what answers you will get from this information?
POLL: How many metrics are you collecting on a regular basis within your organization?
A. 1-5
B. 6-10
C. 11-15
D. 0
Metrics - Benefits
• Understand how QA, testing, and their processes are performing and where the problems are
• Evaluate the process and the product in the right context
• Predict and control process and product qualities
• Reuse successful experiences
  – Feed experience back into current and future projects
• Monitor how something is performing
• Analyze the information provided to drive improvements
How can measurement help us (YOU)?
• Create an organizational memory – baselines of current practices and situation
• Determine strengths and weaknesses of the current process and product
  – What types of errors are most common?
• Develop reasoning for adopting/refining techniques
  – What techniques will minimize the problems?
• Assess the impact of techniques
  – Does more/less manual functional testing versus automation reduce defects?
• Evaluate the quality of the process/product
  – Are we applying inspections appropriately?
  – What is the reliability of the product before/after delivery?
Boil It Down: Understand, Evaluate, and Improve
• To do this… we need metrics.
• Can you think of other examples in our lives where this applies? Where do you use metrics to evaluate and improve?
Understand → Evaluate → Improve
Group Exercise
• How does your organization define software quality?
• Come up with a definition of software quality that you can agree on.
  – Can be one sentence to a short paragraph.
If the goal is to improve software quality, then what is IT?
Metrics in real life
• Food eaten: calories, fat, carbohydrates, protein, time of day, vitamins, …
• Weight: blood pressure, cholesterol, blood glucose, red cell count, white cell count, hematocrit, hemoglobin, body fat %, …
• Performance: effort/power, heart rate/watts, speed, time
• Race results: placing, …
• Context: training, sleep
• Intelligence, finesse
DO: Think about the process you are measuring, and measure at each step in the process.
Software Quality Components
ISO 25010 CMMI
Evolution of SW Quality
Type of quality: process quality → software quality (internal) → software quality (external) → software quality (in use) / product usability
What is measured: software processes → code
How measured: CMMI assessment model → white box testing → black box testing → usability testing, usability heuristics, usability logging
Who measures: CMMI assessment company → programmer → tester → end user
Related standards: ISO 9000, ISO 9241, ISO 9126, ISO 25010
Total Quality View (organizational)
For Those in Agile
Waterfall: speed, quality, cost
Agile: speed, quality, cost
Where’s the beef?
Be-Do-Have
• Be
• Do
• Have
Process: iterative (sprints), daily standups, face-to-face communication, post mortem at end of sprint, delivery meeting at end of sprint, planning meeting before sprint, self-organizing
People: communicative, collaborative/cooperative, flexible and willing, knowledgeable across areas, takes initiative/responsible, responsive
Results: speed, quality
Metrics: velocity, defects, adherence to plan (many), overtime, emails, defect aging, rate of defects not fixed
Measuring Quality
• Within each phase, we model to understand and evaluate quality at that phase
• At each phase, define quality and develop quality measures
• Lower-level measurements aggregate into higher-level cumulative measurements
• Aggregation can be done with various weighting schemes, with flexibility according to the organization's business
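One possible way to sketch such a weighted roll-up. The phase names, scores, and weights below are illustrative assumptions, not values from the presentation:

```python
# Hypothetical weighted roll-up of phase-level quality scores (0-100 scale)
# into a single product score. Weights encode business priorities and must
# sum to 1; all names and numbers here are made up for the sketch.

phase_scores = {"requirements": 82.0, "code_internal": 74.0,
                "system_external": 68.0, "in_use": 71.0}

weights = {"requirements": 0.15, "code_internal": 0.25,
           "system_external": 0.35, "in_use": 0.25}

assert abs(sum(weights.values()) - 1.0) < 1e-9  # sanity-check the scheme

overall = sum(score * weights[phase] for phase, score in phase_scores.items())
# overall is 72.35 for these illustrative numbers
```

Changing the weights changes which phase dominates the roll-up, which is exactly the flexibility the slide describes.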
Don’t – Measure the wrong thing
• Make sure your metrics help you determine whether you are meeting a goal
• Some sample metrics to review:
  – Test coverage = number of units (KLOC/FP) tested / total size of the system
  – Test density = number of test cases per unit of size (per KLOC/FP)
  – Test cases executed
  – Defects per size = defects detected / system size
  – Defects found per tester
  – Test cases written per day
Don't measure something just because you can.

"Not everything that can be counted counts, and not everything that counts can be counted."
— Albert Einstein
DO: Be clear about WHY you are measuring.
Why do we need to measure?
• Our bosses want us to…
• They want someone to point fingers at
• They want to fire some people and save money
• They need to report to their managers
• They want some basis on which to evaluate us and give us a raise!
• We need to figure out a way to do better!
• We want to improve our work and improve software quality
The Metrics Conundrum
• QA and Testing Language
  – Defects
  – Execution status
  – Test cases
  – Pass/fail rates
  – DRE…
• Business Language
  – Cost effective
  – ROI
  – Cost of ownership
  – Cost of poor quality
  – Productivity
  – Calls to help desk
  – Customer satisfaction
  – Customer retention
In your organization…
• What measurements do you take in your organization and why?
• Who uses them and for what?
The Metric Reality
• Measurement and metrics are like dinner. It takes 2-3 hours to make dinner, and 15 minutes to consume…
• But… many metrics are never reviewed or analyzed (consumed)
• WHY?
The Metric Conundrum
• Test leads and test managers rarely have the right metrics to show or quantify value
• Metric collection and reporting are a drag
• QA metrics usually focus only on test execution
• Test tools don't have most of the metrics we want
• Reports generated by QA are only rarely reviewed
• Metrics are not connected to anything of value/meaningful for ________.
Don’t – Forget to differentiate between quality, testing and defects
• The metric becomes the goal
• Organizations concentrate on "the metrics" and forget to understand each metric's relationship to the goal or objective.
• Defect counts need to be incorporated into an overall valuation because quality is ultimately measured in the eyes of the end user.
Software Quality Evaluation Frameworks
• Goal-Question-Metric: why and what metrics
• Understanding software quality
• Business & product processes/lifecycles
• Software development processes
• Software testing processes
• Testing metrics
• Software quality models
• Software quality processes
Don’t – Forget about context
• Metrics don’t have consistent context so they are unreliable – Context needs to be defined and then maintained for measurements to be meaningful.
• Difficult in today’s environment with changing test platforms and other contextual factors.
What contextual factors could there be?
• Release complexity • Development methodology • Software maturity • Development team maturity and expertise • Development team and QA integration • Resources available • User base
All metrics need to be normalized for proper interpretation
Metrics need context to tell the whole story
• Normalized per function point (or per LOC)
• At product delivery (first X months or first year of operation)
• Ongoing (per year of operation)
• By level of severity
  – Gross numbers don't tell much
• By category or cause, e.g. requirements defect, design defect, code defect, documentation/online help defect, defect introduced by fixes, etc.
  – Absolute and total numbers tell you nothing
  – Too granular also tells you nothing
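A small sketch of this normalization, assuming hypothetical defect counts and a 30 KLOC release (all figures illustrative):

```python
# Normalizing raw defect counts so releases of different sizes can be
# compared. The counts, severity labels, and release size are made up.

defect_counts = {"critical": 3, "major": 12, "minor": 45}
size_kloc = 30.0  # release size in thousands of lines of code

total_defects = sum(defect_counts.values())   # 60 -- a gross number
defects_per_kloc = total_defects / size_kloc  # 2.0 -- comparable across releases

# Per-severity densities, since gross totals alone don't tell much
by_severity = {sev: n / size_kloc for sev, n in defect_counts.items()}
# e.g. by_severity["critical"] == 0.1 critical defects per KLOC
```

The same raw counts become meaningful only once divided by size and broken out by severity.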
Don’t – Be sporadic or irregular
• Measurements used are not consistent
  – Just as context needs to be consistent, so do the measurements, methods, and time intervals at which you collect the measurements and calculate the metrics.
• Just as in weighing yourself: it doesn't make sense to drink 2 gallons one day and weigh in, then go jogging 10 miles the next day and weigh in.
Measurement Quality
• Two facets of measurement quality are reliability and validity.
• Reliability: how repeatable the results are; low reliability indicates that many random errors are occurring during measurement.
• Validity: whether the metric measures what it was intended to measure.
  – Errors in validity, called systematic errors, usually occur because not all factors were taken into account when the measurement was designed.
Metrics Abandoned for Many Reasons
• How many of you have had these problems?
– That measurement is not valid because…
– That measurement is not reliable because…
If no one believes your metrics (and therefore no one takes improvement actions based on them), then why bother?
Do Understand Metric Terms
• What is a metric? • What is a measurement? • What is an indicator? • What’s the difference?
Measures Versus Metrics

Measures:
• To be executed
• Executed
• Passed
• Failed
• Re-executed
• Total executions
• Total passes
• Total failures

Metrics:
• TC % complete
• TC % passed
• % failures
• % defects corrected
• % bad fixes
• % defect discovery
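The raw measures on the left become the metrics on the right by forming ratios. A minimal sketch with assumed counts (all numbers illustrative):

```python
# Raw *measures* (plain counts) turned into *metrics* (ratios).
# The counts below are assumptions for the sketch.

measures = {"to_be_executed": 200, "executed": 160, "passed": 136, "failed": 24}

tc_pct_complete = measures["executed"] / measures["to_be_executed"]  # 0.80
tc_pct_passed   = measures["passed"] / measures["executed"]          # 0.85
pct_failures    = measures["failed"] / measures["executed"]          # 0.15
```

The distinction matters: a measure (160 executed) says little on its own, while the metric (80% complete) carries the comparison built in.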
Don’t – Collect measurements that no one wants
• Metrics have no audience – As a corollary to the previous factor, if there is no question to be answered, then there will also be no audience for the metric.
• Metrics need to have an audience in order to have meaning.
• How many of the metrics and reports that you generate are read?
Poll: How many of you collect metrics that you don't need or use?
Group Exercise: Who are your stakeholders?
• CEO/CTO/CIO • Development manager/VP • QA manager • Others?
Do - Collect what “they” want
• Ratios and percentages rather than absolutes
• Comparisons over time, or by release
• Report on what leads to improvement in:
  – Costs
  – Time
  – Quality
Do - Collect what they want
Costs (some potential metrics include):
• Business losses per defect that occurs during operation
• Business interruption costs
• Costs of work-arounds
• Costs of reviews, inspections, and preventive measures
• Costs of test planning and preparation
• Costs of test execution, defect tracking, version and change control
• Costs of test tools and tool support
• Costs of test case library maintenance
• Costs of testing & QA education associated with the product
• Re-work effort (hours, as a percentage of the original coding hours)
• Lost sales or goodwill
• Annual QA and testing cost (per function point)
Do - Collect what they want
Time-Resources (some potential metrics include):
• Labor hours per defect fix
• Turnaround time for defect fixes, by level of severity
• Time for minor vs. major enhancements – actual vs. planned elapsed time
• Effort for minor vs. major enhancements – actual vs. planned effort hours
Do - Collect what they want
Quality (some potential metrics include):
• Survey before, after (and ongoing) product delivery
• # system enhancement requests per year
• # maintenance fix requests per year
• User problems: call volume to customer service/tech support
• User satisfaction
  – Training time per new user; time to reach a task time of X
  – # errors per new user
• # product recalls or fix/patch releases per year
• # production re-runs
• Availability (time the system is available / time the system is needed to be available)
Do - Collect what they want
• Show them in combination and relative to each other
  – Cost vs. quality
  – Cost vs. time
  – Quality vs. time
Don’t – Make the collection effort the end game
• Measurements are too hard to get
  – If you have designed the right metric to answer the right question, it doesn't matter that it takes several person-days to get the data and do the calculations.
• But unless the decisions made from these metrics deliver considerable value, they'll soon be abandoned.
Poll: How many of you started to collect metrics but then found it was too difficult or time-consuming and quit?
Don’t – Forget indicators
• Metrics have no indicators, so you cannot evaluate
  – You collect mounds of data, but then what?
  – How do you determine what is 'good' or 'bad'?
  – Before you start collecting and calculating, you need to put together a way to evaluate the numbers you get, with meaningful indicators that can be used as benchmarks as your metrics program matures.
Indicators
• Indicators change numbers into meaning
• What does 68% mean?
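One way to sketch an indicator: map a bare number onto a judgment via agreed thresholds. The thresholds below are illustrative assumptions, not a recommendation:

```python
# An indicator turns a bare number into meaning. The 0.95/0.85 thresholds
# are illustrative; each organization must agree on its own benchmarks.

def indicator(pass_rate: float) -> str:
    """Translate a test pass rate (0.0-1.0) into a judgment."""
    if pass_rate >= 0.95:
        return "good"   # quality gate met
    if pass_rate >= 0.85:
        return "watch"  # investigate failing areas
    return "bad"        # quality gate not met

# 68% on its own is just a number; against thresholds it becomes meaning:
print(indicator(0.68))  # prints "bad"
```

Without the thresholds, "68%" answers nothing; with them, it triggers a decision.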
Don't Mix Up Causality
• The cause-and-effect relationship has three requirements:
  – The cause precedes the effect in time or by logic
  – A demonstrated cause-and-effect relationship
  – A direct, constant relationship
• The relationship between variables can be graphed and a line of best fit found
• Be careful of extreme and inaccurate data values
• When defining a causal relationship, watch for misinterpreted relationships:
  – CASE 1: Z causes changes in X; Z causes changes in Y; X is perceived to cause changes in Y (or vice versa) and Z is overlooked
  – CASE 2: X causes changes in Z; Z causes changes in Y; X is perceived to cause changes in Y and Z is overlooked
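CASE 1 can be illustrated with synthetic data: a hidden factor Z drives both X and Y, producing a strong X-Y correlation even though neither causes the other. A sketch (all data simulated):

```python
import random

# Synthetic illustration of CASE 1: Z drives both X and Y, so X and Y
# correlate strongly although there is no direct causal link between them.

random.seed(0)
z = [random.gauss(0, 1) for _ in range(1000)]  # the overlooked factor
x = [zi + random.gauss(0, 0.1) for zi in z]    # X driven mostly by Z
y = [zi + random.gauss(0, 0.1) for zi in z]    # Y driven mostly by Z

def corr(a, b):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

r = corr(x, y)  # close to 1.0, yet X does not cause Y (nor vice versa)
```

In a metrics program, Z might be release complexity driving both defect counts and overtime; concluding that overtime causes defects would be the misinterpretation the slide warns about.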
Indicators and Interpretation
Conclusions
• Designing and implementing a software quality metrics program requires more than defect analysis.
• Think about who and what questions you want to answer, or the goals of using metrics.
  – Many refer to this as the goal-question-metric (GQM) paradigm
  – In simple terms: what are you going to do with the numbers once you get them?
• Most of the "Don'ts" are related to not thinking about the objectives of the metrics and the actions you will take based on them.
Solutions
• Develop a stakeholders list and their goals/objectives
• Develop a list of questions that, if answered, would determine whether the goals are met
• Develop a catalogue of metrics (that answer the questions) that you can mix and match to apply to the goals, depending on the stakeholder
• Develop and collect metrics that accompany each part of the development process, not just testing
  – There are many "defects" not directly in dev.
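The stakeholder/goal/question/metric catalogue could be sketched as plain data, in the spirit of GQM. The stakeholders, goals, questions, and metric names below are illustrative placeholders, not a prescribed set:

```python
# A GQM catalogue sketched as plain data: stakeholder -> goal -> questions
# -> metrics. All entries are illustrative placeholders.

gqm = {
    "QA manager": {
        "goal": "release with fewer escaped defects",
        "questions": {
            "Are defects found before release?": ["defect removal effectiveness"],
            "Where do defects cluster?": ["defects per KLOC by module"],
        },
    },
    "CEO/CTO": {
        "goal": "reduce the cost of poor quality",
        "questions": {
            "What does each escaped defect cost?": ["business loss per defect"],
        },
    },
}

def metrics_for(stakeholder: str) -> list[str]:
    """Mix and match: the metrics that answer a given stakeholder's questions."""
    questions = gqm[stakeholder]["questions"]
    return [m for metric_list in questions.values() for m in metric_list]
```

Keeping the catalogue as data makes the "mix and match" step trivial: `metrics_for("QA manager")` returns exactly the metrics tied to that stakeholder's questions.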
Measurement Framework
• Improvement
• Decisions
• Stakeholders
Do - Develop a Metrics Framework
Software Quality Dashboard (model)
• People: quality of the people, and quality of the stakeholders' goals and objectives (GQM)
• Process: quality processes to implement the goals, and the technology to support them
• Users: viewpoint and measurement of quality from the end user; if the 'people' are well connected to end users, this should also be high quality, with incorporation of context
• Product: high-quality people with high-quality processes should produce a repeatably high-quality product, from both internal and external quality views
Thanks Q&A
www.xbosoft.com @xbosoft 408-350-0508
Philip Lew @philiplew [email protected]
White Papers: http://www.xbosoft.com/knowledge_center/
/xbosoft
Blog: http://blog.xbosoft.com/
/xbosoft