Incidentally…

Gather comprehensively, present selectively

Good incident reporting and its advantages as seen by Jim Holmes

by Jim Holmes

Effectiveness of testing depends upon that of incident management. It has critical roles in improving both product and process. Here I will focus on its use in project management, to answer urgent questions: What's the risk? How confident can we be? What action should we take?

If the answers are to be sufficiently accurate (and if they are not, they are much worse than useless) incident management needs quality input. Information about incidents and defects comes in various forms from multiple sources. All of it must be accepted correctly. Other contributors to this issue of PT discuss how to make incident reporting efficient and comprehensive. This article is about making it flexible and adaptable.

Form filling

My ideas about how to raise incidents have been influenced by the chapter entitled “Bug Management and Test Case Effectiveness” by Emily Chen and Brian Nitz, in Beautiful Testing (Tim Riley and Adam Goucher, eds, O'Reilly, ISBN 9780596159818).

This, other works and personal experience have led me to believe that incident reports should contain just the right amount of information. Stuffing them with superfluous log entries, ancillary stack traces, and memory dumps is harmful. The extraneous information throws readers off the true track of what has happened, why it matters and how to do whatever they need to do.

Duplicating information between reports is similarly noisy, and also error-prone: references to its source are far better. For example, rather than including a failure report from a CI server, include a link to the report page for the relevant build. That way the reader can choose what from that is relevant to his or her task, rather than being guided or misguided by the incident report author. Furthermore, if the source information is updated, the incident report will not become out of date.

Never allow automated filling of incident report forms by automated testing, regardless of what executes it or where. It is likely to flood incident management with false positives. Incident detection by automated testing requires human intervention before reporting and, if reporting is necessary, human reporting.
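As a concrete illustration of that gate (the queue file and field names are hypothetical, not any particular tool's API), an automated run can park its failures somewhere a tester reviews them, and only failures a human confirms go on to become incident reports:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical holding area reviewed by a tester; nothing is sent to the tracker from here.
TRIAGE_QUEUE = Path("triage_queue.json")

def record_failure_for_triage(test_name: str, message: str, artifacts: list[str]) -> None:
    """Called from the automated test run: park the failure for human review
    instead of filling in an incident report automatically."""
    queue = json.loads(TRIAGE_QUEUE.read_text()) if TRIAGE_QUEUE.exists() else []
    queue.append({
        "test": test_name,
        "message": message,
        "artifacts": artifacts,  # paths to logs/screenshots, referenced rather than embedded
        "detected_at": datetime.now(timezone.utc).isoformat(),
        "status": "awaiting human triage",
    })
    TRIAGE_QUEUE.write_text(json.dumps(queue, indent=2))
```

A tester works through the queue, discards false positives and writes up the genuine incidents by hand, or pushes them on once confirmed.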

Figure 1 shows an incident detected during integration testing, reported using TeamPulse, Telerik's ALM tool. It includes a link back to the originating report, a description, and information to support the assertion that the incident occurs locally as well as on the CI server. The last is important because it confirms that the person raising the incident has validated it and that the incident happens in more than one environment, tending to eliminate environmental causes and so saving investigation effort.

Figure 1: incident report

Some test automation tools are able to push test failure details to third-party incident management/bug tracking systems, giving a great start to incident reporting and so to resolution. Figure 2 shows such a “ticket” being created in Telerik Test Studio, which can also output a zip file containing exception details, screenshots depicting expected and actual outcomes and, for web applications under test, a complete copy of the DOM at the moment of detection.

Figure 2: pushing test failure details from automated testing to incident reporting
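Where that integration is not available, a similar start can be made through a tracker's REST interface. This is a minimal sketch assuming a generic tracker with an /api/incidents endpoint and a bearer token; the URL, field names and returned id are assumptions for illustration, not TeamPulse's or Test Studio's actual API. Note that the CI build is referenced by link and the evidence zip is attached, rather than either being pasted into the description.

```python
import os
import requests

TRACKER_URL = "https://tracker.example.com/api/incidents"  # hypothetical endpoint
API_TOKEN = os.environ.get("TRACKER_TOKEN", "")            # issued by the tracker administrator

def push_failure(title: str, description: str, build_url: str, evidence_zip: str) -> str:
    """Create an incident from a confirmed automated-test failure."""
    payload = {
        "title": title,
        # Reference the originating CI build by link instead of duplicating its report.
        "description": f"{description}\n\nOriginating build: {build_url}",
        "severity": "unassessed",  # left for a human to set during triage
    }
    with open(evidence_zip, "rb") as evidence:  # exception details, screenshots, DOM copy
        response = requests.post(
            TRACKER_URL,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            data=payload,
            files={"evidence": evidence},
            timeout=30,
        )
    response.raise_for_status()
    return response.json()["id"]  # tracker-assigned incident id
```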

Non-functional incident reporting

Performance and load/stress testing is more complex and diverse than functional testing. Approaches, practices and toolsets vary enormously. Often the data gathered is transitory and, even for a single incident, is gathered from multiple supporting tools such as Windows Performance Monitor, PowerShell, shell scripts and so on. These factors make it harder to get the amount of information in an incident report right.

Perhaps the most important difference to bear in mind is what is involved in reproduction. The investigating developer will probably not be able to observe the failure easily for himself or herself. So rather than stating it in terms of unexpected output, it is important to document the unexpected observations, indicating what will be looked for during retesting to decide whether a fix has or has not achieved improvement. For this purpose the output from the tool(s) used, including screenshots, is often helpful. Figure 3 shows HTTP responses alongside a graph of CPU utilization, with accompanying entered information, all compiled in Test Studio.

Figure 3: performance failure report
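The raw material for that documentation can be gathered with very little machinery. The sketch below records HTTP response times and CPU utilization side by side while the system is under load; psutil, the sampling interval and the target URL are assumptions standing in for whatever mix of Performance Monitor, PowerShell or shell scripts is really in use. The resulting list can be summarized in the report to say exactly what was observed and what a retest should look for.

```python
import time
import psutil    # assumed available for CPU sampling
import requests

TARGET = "https://app-under-test.example.com/reports"  # hypothetical page under load

def sample(duration_s: int = 60, interval_s: float = 1.0) -> list[dict]:
    """Record response time, status and CPU utilization together during a load run."""
    observations = []
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        started = time.monotonic()
        try:
            status = requests.get(TARGET, timeout=10).status_code
        except requests.RequestException as exc:
            status = f"error: {exc}"
        observations.append({
            "response_ms": round((time.monotonic() - started) * 1000),
            "status": status,
            "cpu_percent": psutil.cpu_percent(interval=None),  # utilization since last sample
        })
        time.sleep(interval_s)
    return observations
```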

Incident reporting as input to exploratory testing

Clear and effective incident reports – even those which end up being closed with no defect found – are a great resource to help direct and decide next steps in unscripted testing, checking and investigation. When mining for ideas, it's good to keep the following in mind (a short sketch of this kind of mining follows the list):

area: grouping incidents by component, feature, purpose etc can reveal important common factors and other associations

severity: while focusing on important defects, occasionally compare them with apparently less important ones, which may help to complete a picture

age: old incidents tend to be less impactful (or they would have been investigated/fixed) and therefore less likely to lead to other important ones, but the implications of recent incidents need investigating and that can be fruitful

churn: incidents that have been closed and reopened, especially if repeatedly, indicate something significant and not understood, so are a good hunting ground.
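As a sketch of that kind of mining (the incident fields and thresholds are illustrative, not any particular tracker's export format), grouping by area and flagging age and churn might look like this:

```python
from collections import Counter
from datetime import datetime, timedelta

# Minimal incident records as they might be exported from a tracker (illustrative fields).
incidents = [
    {"id": 101, "area": "providers", "severity": "high", "reopened": 2,
     "raised": datetime(2013, 1, 28), "status": "active"},
    {"id": 102, "area": "payroll", "severity": "low", "reopened": 0,
     "raised": datetime(2012, 9, 3), "status": "active"},
]

def mine(incidents, now=None):
    """Group active incidents by area and flag the age and churn signals above."""
    now = now or datetime.now()
    active = [i for i in incidents if i["status"] == "active"]
    return {
        "by_area": Counter(i["area"] for i in active),               # common factors per area
        "churned": [i["id"] for i in active if i["reopened"] >= 2],  # closed and reopened repeatedly
        "recent": [i["id"] for i in active
                   if now - i["raised"] < timedelta(days=30)],       # recent incidents worth investigating
    }
```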

Figure 4 shows a list of active incidents displayed using TeamPulse's filtering, searching and manipulation capabilities.

Figure 4: mining active incident data

Considering code coverage information as well makes this method even better. In figure 5, NCover shows poor coverage in the “providers” area of the SUT while a defect is being isolated at unit level. Based on this, a charter was drawn up to explore integration of various types of users and payrolls. As every exploratory tester knows, the most interesting discoveries come from noticing things just outside the charter. In this case, it led to the defect – an editable field that should be read-only – in figure 6.

Figure 5: code coverage measurements

Figure 6: incident detected by exploratory testing

Keep a high-altitude view

Incident information enters your tracking system from many sources. See this for what it is: a tremendous advantage. Use it for steering as well as reporting. More data is a very good thing when it is used thoughtfully.

Jim Holmes is an evangelist for Telerik. Free trials of the Telerik products mentioned are available at http://telerik.com