Test Driven Development and Quality Improvement
Lero© 2010
Test Driven Development and Quality Improvement
Carlos Solis, John Noll, and Xiaofeng Wang
Contents
• Test Driven Development
• Unit testing
• Code coverage
• The analysis of two open source projects
Waterfall model
Prioritizing the requirements
Iterative and incremental development
[Diagram: iterative and incremental development. From an initial requirements list, cycles Cycle-1 … Cycle-n each perform Analysis-i, Design-i, Implementation-i, and Testing-i.]
Interlinked failures
• Newer code can cause failures in code developed in previous iterations.
Iterative and incremental development
[Diagram: iterative and incremental development with regression testing. From an initial requirements list, each cycle i (Cycle-1 … Cycle-n) performs AnalysisSet-i, Design-i, Implementation-i, and then Testing-i, Testing-i-1, …, Testing-1.]
Test Driven Development
Test Driven Development is an evolutionary approach that relies on very short development cycles and the practices of writing automated tests before writing functional code, refactoring, and continuous integration.
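The test-first rhythm described above can be sketched as follows (a hypothetical StringUtils example, not from the slides): the test is written first and fails ("red"), then just enough code is written to make it pass ("green"), and the result is refactored.

```java
// Minimal test-first sketch (hypothetical example): the test class below
// was conceptually written before the implementation existed.
class StringUtils {
    // Implemented second, just enough to make the test pass.
    static String capitalize(String s) {
        if (s == null || s.isEmpty()) return s;
        return Character.toUpperCase(s.charAt(0)) + s.substring(1);
    }
}

class StringUtilsTest {
    // Written before capitalize existed; failed ("red") until the
    // implementation above made it pass ("green").
    static void testCapitalize() {
        assert "Hello".equals(StringUtils.capitalize("hello"));
        assert "".equals(StringUtils.capitalize(""));
    }
}
```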
Agile methods
• Agile methods are the most popular iterative and incremental software methods.
Testing scope by granularity
• Unit test. Do functions, classes and modules work as expected?
• Integration Test. Do modules interact and work with other modules as expected?
• Acceptance Test. Does the system work as expected?
White box testing
• In white box testing the tester has access to the code under test, and often has to program the tests.
• Unit tests and integration tests are white box tests.
Black box testing
A black box test tests the functionality of a system or module without knowledge of its internal structure.
An acceptance test is a black box test.
Test Driven Development
[Diagram: Test Driven Development cycle. From an initial requirements list, each cycle i (Cycle-1 … Cycle-n) covers RequirementsSet-i, Design-i, UnitTest-i, Implementation-i, refactoring, and AcceptanceTest-i.]
Unit testing
• A unit test tests a small portion of code, usually methods and classes, in isolation.
Unit testing in JUnit
public class WikiPageTest extends TestCase {
    protected WikiPage fixture = null;

    protected void setUp() throws Exception {
        setFixture(ShywikisFactory.createWikiPage());
    }

    protected void setFixture(WikiPage fixture) {
        this.fixture = fixture;
    }

    protected void tearDown() throws Exception {
        setFixture(null);
    }
Unit testing in JUnit
public void testAddVersion() {
    WikiPage wp = this.fixture;
    User user1 = ShywikisFactory.createUser();
    user1.setName("user1");
    String body = "Hi!";

    Version actual = wp.getActual();
    assertNull(actual);

    Version newVer = wp.addVersion(user1, body);
    actual = wp.getActual();
    assertEquals(actual, newVer);
    assertEquals(user1, actual.getUser());
    assertEquals(body, actual.getBody());
    …
Unit testing in JUnit
public void testVersionsOrder() {
    …
    for (i = 0; i < 5; i++) {
        body = "body" + i;
        wp.addVersion(user, body);
    }

    Iterator<Version> it = wp.getVersions().iterator();
    i = 0;
    while (it.hasNext()) {
        Version ver = it.next();
        assertEquals("user" + i, ver.getUser().getName());
        assertEquals("body" + i, ver.getBody());
        i++;
    }
Independence: Mock Objects
Mock objects are objects that simulate the behavior of other objects in a controlled way.
Mock objects have the same interface as the objects they simulate.
Independence: Mock Objects
public class HBWikiPagePerfomer implements WikiPagePerfomer {
    Session session;

    public Version createWikiPage(String name, String user) {
        …
        WikiPage wp = ShywikisFactory.createWikiPage();
        wp.setName(name);
        session.save(wp);
        …
    }
Example: Mock Objects
public class SessionMock implements Session {
    public void cancelQuery() throws HibernateException {}

    public Connection close() throws HibernateException {
        return null;
    }
    ….
    public Serializable save(Object arg0) throws HibernateException {
        return null;
    }
Example: Mock Objects
public void testCreateWikiPage() {
    HBWikiPagePerfomer hbp = getFixture();
    assertNull(hbp.createWikiPage("wp1", "user1"));
}

protected void setUp() throws Exception {
    HBWikiPagePerfomer hbp = new HBWikiPagePerfomer();
    hbp.setSession(new SessionMock());
    setFixture(hbp);
}
Complex tests: Mock Objects
[Diagram: a Financial System Client talks to a Bloomberg Server over a TCP/IP communication layer using the Bloomberg Trade Protocol (Init, getReport(X), Bye).]
How can we test this?
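One answer, in the spirit of the mock objects just introduced: hide the server behind an interface and substitute a mock that replays canned protocol responses, so no TCP/IP connection is needed in the tests. All names below are hypothetical, not the actual Bloomberg API.

```java
// The protocol operations from the diagram, as a hypothetical interface.
interface TradeServer {
    String init();
    String getReport(String id);
    String bye();
}

// Mock that replays canned responses instead of opening a TCP/IP session.
class MockTradeServer implements TradeServer {
    public String init() { return "OK"; }
    public String getReport(String id) { return "REPORT:" + id; }
    public String bye() { return "BYE"; }
}

// The client under test depends only on the interface, so it can be
// exercised against the mock in a plain unit test.
class ReportClient {
    private final TradeServer server;
    ReportClient(TradeServer server) { this.server = server; }

    String fetchReport(String id) {
        server.init();
        String report = server.getReport(id);
        server.bye();
        return report;
    }
}
```

In production the client would be constructed with a real TCP/IP-backed implementation of TradeServer; the tests pass it a MockTradeServer instead.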
Code coverage
Test coverage measures the degree to which code is exercised by a set of automated tests.
It measures how complete a set of tests is in terms of how many instructions, branches, or blocks are covered when the unit tests are executed.
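The metric itself is just a ratio, as this sketch shows; real coverage tools such as EMMA or Cobertura instrument the code to obtain the executed/total counts.

```java
// Illustration of the coverage metric only, not a coverage tool:
// statement coverage is the fraction of instructions the tests executed.
class Coverage {
    static double statementCoverage(int executedStatements, int totalStatements) {
        if (totalStatements == 0) return 0.0; // avoid division by zero
        return (double) executedStatements / totalStatements;
    }
}
```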
Integration testing
Integration testing is about testing the interacting modules of an application.
There are frameworks for integration testing, such as DbUnit and HttpUnit.
Integration testing
In some cases unit test frameworks can be used to perform integration and acceptance tests.
Integration testing: instead of using mock objects, the unit tests would use real objects (a testing database, for example).
Integration Test
An approach is to reuse the unit tests with different setUp and tearDown methods.

protected void setUp() throws Exception {
    Session session = sessionFactory.openSession("test.cfg");
    HBWikiPagePerfomer hbp = new HBWikiPagePerfomer();
    hbp.setSession(session);
    setFixture(hbp);
}
Automated Test Driven Development
[Diagram: Automated Test Driven Development cycle. Each cycle starts with RequirementsSet-i, then Design-i, UnitTest-i, Implementation-i, and refactoring; an automated acceptance test (AutomatedAcceptanceTest-i) either fails (NOK, loop back) or passes (OK).]
Acceptance test
An acceptance test proves that a requirement does what its specification says; therefore, if the test passes, the customer accepts that the requirement is fully implemented.
Acceptance test
Each requirement has a set of specific cases or scenarios.
Each scenario has a set of tests.
A requirement is accepted if the tests of all its scenarios pass.
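The acceptance rule above can be sketched as follows (the types are a hypothetical simplification, not from the slides): a requirement is accepted only when every test of every scenario passes.

```java
import java.util.List;

// Sketch of the acceptance rule: one list of test outcomes per scenario;
// the requirement is accepted only if all tests of all scenarios pass.
class Acceptance {
    static boolean requirementAccepted(List<List<Boolean>> scenarioResults) {
        for (List<Boolean> scenario : scenarioResults) {
            for (boolean testPassed : scenario) {
                if (!testPassed) return false; // one failing test rejects it
            }
        }
        return true;
    }
}
```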
Acceptance Test
[Diagram: Fit drives the application below the user interface with simulated user input; input controller objects and business/model objects are covered by unit and integration tests.]
Acceptance test
There are automated acceptance testing frameworks such as Cucumber, JBehave, Fit, and FitNesse.
User Stories and Scenarios using BDD ubiquitous language and plain text
Automated Acceptance Testing and Readable Behaviour Oriented Specification Code
The adoption problem
• Developers have to learn to write effective unit tests. These tests will help to reduce bugs and lead to better software designs.
• Is it better to have more testers, or to let developers learn TDD? It is a question of accountability.
The adoption problem
• Adopt it progressively: first automated unit testing, later automated acceptance and integration testing.
• If the organizational structure follows the waterfall model, there will be resistance to adopting it.
Our research: What is its effect on quality?
[Diagram: the researched cycle. Each cycle starts with RequirementsSet-i, then Design-i, UnitTest-i, Implementation-i, and refactoring; AcceptanceTest-i either fails (NOK, loop back) or passes (OK), starting the next cycle.]
Our research
Automated testing has a positive effect on the quality of code in an OSS context.
Projects analysed
Approach
• Open source projects with automated tests and with well documented bug repositories.
• Bug density. Instead of using the bugs reported, we used the modifications to the source code repository that happened after the release date. Each file in a project has an associated number of post-release modifications.
• The projects’ tests were executed and the test coverage calculated.
Approach
• The final step was to analyze the data to see if there is a relationship between the test coverage and the number of post-release modifications of the files.
Bug density
[Timeline: commits between the previous release date and the release date, with bug-fix commits marked.]
Increment the defect count for each file associated with entries that were determined to be actual bug fixes.
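That counting step can be sketched as follows (the input format is a hypothetical simplification of an already-parsed commit log): for every commit judged to be a bug fix, each file it touched gets its defect count incremented.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the defect-count step: fixCommits holds, for each commit
// identified as a bug fix, the list of files that commit touched.
class DefectCounter {
    static Map<String, Integer> countFixes(List<List<String>> fixCommits) {
        Map<String, Integer> defects = new HashMap<>();
        for (List<String> touchedFiles : fixCommits) {
            for (String file : touchedFiles) {
                // Increment this file's post-release defect count.
                defects.merge(file, 1, Integer::sum);
            }
        }
        return defects;
    }
}
```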
Code coverage and defect density
JFreeChart
OrangeHRM
Files with post-release modifications (prm): OrangeHRM
Files with post-release modifications (prm): JFreeChart
Correlation
Result analysis
• If Spearman's rank correlation coefficient is negative, there is a negative correlation between test coverage and post-release fix density; in other words, higher coverage may mean lower fix rates.
• The significance of this correlation is measured by the p-value: the probability that the correlation would be observed when the null hypothesis is true.
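For reference, Spearman's rho can be computed by ranking both variables and applying the rank-difference formula. This sketch assumes no tied values; real project data with ties would need average ranks instead.

```java
import java.util.Arrays;

// Spearman's rank correlation via rho = 1 - 6 * sum(d^2) / (n (n^2 - 1)),
// valid when neither variable contains tied values.
class Spearman {
    // Rank of each value within its array (1 = smallest); assumes no ties.
    static int[] ranks(double[] v) {
        double[] sorted = v.clone();
        Arrays.sort(sorted);
        int[] r = new int[v.length];
        for (int i = 0; i < v.length; i++) {
            r[i] = Arrays.binarySearch(sorted, v[i]) + 1;
        }
        return r;
    }

    static double rho(double[] x, double[] y) {
        int n = x.length;
        int[] rx = ranks(x), ry = ranks(y);
        double sumD2 = 0;
        for (int i = 0; i < n; i++) {
            double d = rx[i] - ry[i]; // rank difference for observation i
            sumD2 += d * d;
        }
        return 1.0 - (6.0 * sumD2) / (n * (double) (n * n - 1));
    }
}
```

A perfectly inverse ordering (higher coverage, fewer fixes, for every file) gives rho = -1; identical orderings give rho = 1.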
Result analysis
• There is a small negative correlation for each project.
• For JFreeChart, a p-value of 0.0993 means that there is less than a 10% probability that the observed negative correlation occurred by chance.
• The OrangeHRM data have a p-value of 0.887, meaning the relationship is very likely due to random chance.
Result analysis
• Defects are not distributed evenly across all files. When the analysis is limited to files that experienced post-release fixes, the negative correlation is larger for both projects.
• The p-value for JFreeChart is 0.084; for OrangeHRM it is 0.0364.
• As such, there is reason to reject the null hypothesis, with some caution, and conclude that increased statement coverage might reduce post-release defect density.
Future work
• We think we have to normalize using the pre-release modifications of each file.
• We would calculate the correlation between the ratio (post-release modifications / pre-release modifications) and file coverage.
Thank you
Questions?