PROFESSIONAL TESTER – Essential for software testers
June 2016, v2.0, number 37. £4 / €5
Including articles by:
Gregory Solovey Nokia
Isabel Evans BCS SIGiST
Hans Buwalda LogiGear
Huw Price CA Technologies
Derk-Jan de Grood Valori
Advertisement – Ranorex: Test Automation for Everyone
Seamless integration. Any technology. Broad acceptance. Robust automation. Quick ROI.
New version out now: remote test execution, Git integration, performance improvements, fresh user interface, code templates and much more.
All Technologies. All Updates. www.ranorex.com/try-now
The value of testing

From the editor
We aim to promote editorial independence and free debate: views expressed by contributors are not necessarily those of the editor nor of the proprietors. © Professional Tester Inc 2015. All rights reserved. No part of this publication may be reproduced in any form without prior written permission. 'Professional Tester' is a trademark of Professional Tester Inc.
Professional Tester is published by Professional Tester Inc
We all appreciate that the world is connected.
In fact, it's been suggested that some 30 billion
'autonomous things' will be attached to the
Internet by 2020 and with the pressure to
move at greater speed, there is little wonder
that application delivery is a challenge.
Organizations identify that the biggest stumbling block is establishing a consistent approach to testing across multiple channels of engagement. Whilst the broad-brush understanding exists that testing should be an early consideration, not an afterthought, senior managers still need to commit to involving testing teams at the outset, and to understand the value of doing so, if applications are to deliver for business.
Derk-Jan de Grood argues that testers also need to challenge and ask 'What are you trying to achieve?' While testers question the 'what', we haven't ignored the 'how': contributors Huw Price, Hans Buwalda and Gregory Solovey each set out practical insights into how it is possible to deliver more consistently.
We hope this issue provides plenty of food for thought and, as ever, your feedback and views are welcome.
Vanessa Howard
Editor
IN THIS ISSUE
A test mature organization – Ever wondered how organizations know when they have achieved test maturity? Let Gregory Solovey guide you.
Make yourself heard – BCS SIGiST is offering mentoring for testers looking to improve their presentation skills, and here Isabel Evans outlines why these skills matter.
Test design driven automation – Hans Buwalda sets out his view on the prerequisites for achieving automation success.
Meeting the challenge of API testing – Huw Price argues that the ubiquity of APIs makes the need for rigorous model-based testing more pressing than ever.
Testing Talk – In this issue's Testing Talk, Anthea Whelan talks to Derk-Jan de Grood about why testing needs to be about far more than bug hunting.
PT - June 2016 - professionaltester.com
Gregory Solovey
Isabel Evans
Hans Buwalda
Huw Price
Derk-Jan de Grood
Editorial Board
Professional Tester would like to extend its thanks to Lars Hoffmann, Gregory Solovey and Dorothy Graham. They have provided invaluable feedback reviewing contributions.
Editor: Vanessa Howard
Managing Director: Niels Valkering
Art Director: Christiaan van Heest
Publisher: Jerome H. Mol
by Gregory Solovey
A test mature organization
Test strategy
Every software organization aims to achieve zero implementation defects. There is no silver bullet (or “golden” practice) that makes this possible. What, then, is the set of necessary and sufficient practices?
How do organizations know when they have achieved test maturity? Here's a useful guide.

The Test Maturity Model (TMM) is a very accurate way to categorize test-related practices and methodically guide organizations through the sequence of steps required to achieve test maturity. There are five levels in TMM, as seen in Figure 1: Initial, Definition, Integration, Management and Measurement, and Optimization.

This article is an attempt to describe the specifics of each stage, what challenges you can expect and when it is reasonable to consider a level "reached".
Level 1 – Initial
The tests are executed manually and typically two types of tools are used at this stage: a test management system (TMS) and a change request (CR) system.
The first challenge is to identify the objects to test. Modern software is multilayered, with independent services on each layer. For example, drivers, middleware, application, GUI. The layers communicate with each other through APIs, messages, DB, etc. The question is how will it be possible to test an intermediate layer as an independent product? Can this be done through a higher level application, through a simulator of the higher level application, or directly through the APIs? Should we test it using the GUI to simulate the end user scenarios, or through the “rest-less” APIs to make the test stable and independent from the GUI implementation? There is no definitive answer, each approach has its pros and cons.
The second challenge is to implement the object's controllability and observability, which allow for the execution of all test cases. In other words, when the test interface is defined, it is necessary to analyze that each test case can be executed: can a test stimulus be initiated from an external interface and can the test response be captured at the external interface? If this is not the case, a test harness has to be implemented, in the form of additional GUI objects or CLI commands, in order to expose the APIs, messages, states, attributes, etc.
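As an illustration of controllability and observability, here is a minimal sketch. The component and its debug command are invented for this example, not taken from the article: a stimulus is initiated through the public API (controllability), and a small test-harness command exposes the otherwise hidden internal state (observability).

```python
# Hypothetical order-processing component whose internal state is not
# normally visible from outside, plus a test-harness command added
# purely so tests can observe that state.

class OrderProcessor:
    def __init__(self):
        self._state = "idle"          # internal: invisible without a harness
        self._orders = []

    def submit(self, order_id):       # controllability: stimulus via public API
        self._orders.append(order_id)
        self._state = "processing"

    # --- test harness: extra CLI-style command exposing internal state ---
    def debug_dump(self):             # observability: capture the response
        return {"state": self._state, "queued": list(self._orders)}

p = OrderProcessor()
p.submit("A-1")
snapshot = p.debug_dump()
```

Without `debug_dump`, a test could trigger `submit` but never verify the resulting state; the harness closes that gap.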
Figure 1: The five levels of the TMM
• Initial: manual test; test management system; CR management system
• Managed: stand-alone test automation frameworks for unit, functional and performance test tools
• Integration: continuous integration test framework for continuous deliveries and dynamic test environments
• Quantitative: quality monitors and metrics dashboards; TDD/BDD; requirements, code, errors coverage
• Optimizing: continual process improvement through review, audits and RCAs
Figure 2: Tests are executed manually, in a stand-alone environment
The third challenge is to establish a test hierarchy according to the document hierarchy. The tests have to be a mirror of the requirements/architecture/specification/interface documents.

This level is achieved when the following criteria are met:
„ Stable manual tests exist for each system document.
„ All test cases can be executed completely through the external interfaces.
„ Test cases are stored in a test management system, for example Quality Center.
„ All found errors are registered in a defect tracking system, for example Godzilla.

Level 2 – Definition
It is test automation time. Test tools have to be selected based on the available interfaces (web, GUI, CLI, mobile, DB) or on the operating system and the development environment: Windows vs. Linux; .NET vs. Java vs. SOAP, etc. Today the tendency is toward open source test tools rather than commercial or internally developed ones. What is important is that each has a test framework, as shown in Figure 3.

A test framework provides two main benefits to the user:

1. It transforms the test development process from coding toward "declaring" tests. The test framework provides a domain-specific language (DSL) to the tester, which supports a hierarchical test structure and uses "verbs" (keywords) such as: set, send, repeat, receive, compare, tear down. Each keyword, as a function, accepts parameters and is supported by a "driver" that does the real job. The set of commonly used keywords is organized in libraries, and test development then shifts to setting out a sequence of keywords. A framework also provides an IDE for the testers to select and order keywords, run tests and analyze results.

2. It makes the testware change-tolerant with respect to changes in requirements, API syntax, GUI appearance, CLI parameters, file locations, security attributes, etc. This is achieved with various libraries, which maintain the identity of the CLI command syntax, API parameters and GUI objects. A test framework supports the testware organization through a collection of configuration files, test suites, test cases, and libraries of drivers and keywords.

This level is achieved when the following criteria are met:
„ A framework shifts the testers' focus from writing new test scripts to declaring tests with keywords.
„ The debug file structure is standardized, allowing for easy error identification and creation of "defect trackers".
„ The testware is organized in a fashion that allows for unique changes for any single modification of the object-to-test.

Level 3 – Integration
Continuous integration, continuous delivery and continuous deployment are buzzwords today in software engineering. Software is built numerous times per day for multiple changes, various releases, features and applications. It has to be tested immediately by various test tools and in sophisticated test environments that include permutations of browsers, operating systems, device variants, and so on, as seen in Figure 4.
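To make the Level 2 "declaring" style concrete, here is a minimal sketch of a keyword-driven test in Python. The keywords, drivers and test rows are invented for illustration; they do not correspond to any particular framework's API.

```python
# Minimal keyword-driven sketch: each keyword maps to a driver function,
# and a test is just a declared sequence of (keyword, parameters) rows.

def run_test(rows, drivers, state=None):
    state = {} if state is None else state
    results = []
    for keyword, *params in rows:
        results.append(drivers[keyword](state, *params))  # driver does the real job
    return results

# "Driver" library backing the keywords
drivers = {
    "set":     lambda st, key, val: st.__setitem__(key, val),
    "send":    lambda st, msg: st.setdefault("sent", []).append(msg),
    "compare": lambda st, key, expected: st.get(key) == expected,
}

# The tester "declares" the test instead of coding it
test = [
    ("set", "user", "admin"),
    ("send", "login"),
    ("compare", "user", "admin"),
]
outcome = run_test(test, drivers)
```

A real framework adds the IDE, libraries and change-tolerance layers on top, but the core interpreter loop is no more than this.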
The challenges go beyond the test automation framework: the test environments have to be dynamically created, results from several test tools have to be presented in standardized form, the mechanisms of build acceptance have to be defined, etc. Here are a few more features related to the continuous integration test framework: web-based monitors to keep track of all critical aspects, automated tools to correct and mitigate the results of test failures, recovery from crashes and application failures, automatic test reruns, and automatic test results backup and maintenance.
Upon build completion the continuous integration test framework should automatically initiate parallel tests in all official environments. If errors are discovered, the test team immediately takes action to identify the source of the errors, distinguish code errors from environment and testware faults, and make a determination about the "fate" of the build.
This level is achieved when the following criteria are met:
„ A means exists to describe/define the release data within the framework: verification requests, resources, test environment, applications and associated tests.
„ A consistent framework debug file structure is defined for all test tools, which references the test description and specifies the details of the "verdict" creation.
„ There are framework features that allow testers to re-run particular failed tests, filter environment errors from the code-related ones, manually update the results, and regenerate metrics/e-mails.
Figure 3: Tests are executed automatically in a stand-alone environment

Figure 4: Allowing for tests in multiple environments and multiple builds simultaneously, via a continuous integration test framework

Level 4 – Management and Measurement (Quantitative)
At this level automated test cases can be easily added to the continuous integration solution. As a result, at some point, the number of test cases and the time to run them can become unmanageable. The challenge now becomes to limit this number, by creating a minimum set of needed test cases. There are a few ways to measure the necessary number of test cases, via the coverage of various properties of the object to test: requirements and architecture coverage, code coverage, error coverage (in APIs, messages, states, models). Unfortunately, none is perfect and sufficient by itself; all have to be used to some degree.
„ Requirements-architecture coverage (traceability). Obviously this is a necessary, but not sufficient condition. Each requirement has to always be tested by more than one test case. A bi-directional traceability tool traces test cases from the test management system to the requirements documents and vice-versa.
„ Code coverage. Test cases have to cover all lines of code, but again, this is not a sufficient condition. Each line of the code has to be tested, but sometimes more than one test case is required. A code coverage tool is particularly useful for developer unit test.
„ Error coverage. It is assumed that all requirements and architecture items are formally described by UML-like models such as state machines, conditions,
algorithms, ladder diagrams, syntax, etc. In this case there are formal methods for test design that guarantee the minimum set of test cases that covers all implementation errors. Unfortunately, there are no satisfactory solutions for automatic software test generation (it only exists in the hardware world). Therefore, in order to control this process, testers need to be trained and must verify the test quality during the review meeting.
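A bi-directional traceability check of the kind described above could be sketched like this; the requirement and test-case IDs are invented, and the "more than one test case per requirement" rule follows the text.

```python
# Bi-directional traceability sketch: every requirement should be hit by
# more than one test case, and every test case should trace to some
# requirement. IDs are invented for illustration.

def trace(requirements, test_to_reqs):
    req_hits = {r: 0 for r in requirements}
    orphans = []
    for test, reqs in test_to_reqs.items():
        if not reqs:
            orphans.append(test)          # test case with no requirement
        for r in reqs:
            if r in req_hits:
                req_hits[r] += 1
    # "Each requirement has to always be tested by more than one test case"
    under_tested = [r for r, n in req_hits.items() if n < 2]
    return {"under_tested": under_tested, "orphan_tests": orphans}

report = trace(
    ["REQ-1", "REQ-2"],
    {"TC-1": ["REQ-1"], "TC-2": ["REQ-1"], "TC-3": []},
)
```

In practice the two mappings would be pulled from the test management system and the requirements documents rather than hard-coded.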
For a complete picture of this level, a few more measurements have to be maintained for each software build: server/application performance benchmarks (memory allocation, processes utilization) and load, scalability benchmarks (throughput, delays, latency).
To make these measurements work they should be presented as a set of dashboards for various entities: releases, features, software components, and so on, as shown in Figure 5. In this case the focus of the project's review meetings would shift from discussing “where we are” to “what should be done in order to be where we want to be”.
This level is achieved when the following criteria are met:
„ Dashboards are in place to measure the test quality; the weak spots can be easily identified, indicating where necessary adjustments are needed.
„ Performance and scalability benchmarks are taken and verified for each build.

Figure 5: Dashboards monitor the quality of tests (test density, code coverage, requirements coverage and CR density, per feature)

Gregory Solovey, PhD, is a distinguished member of technical staff at Nokia and a frequent contributor to Professional Tester.
Level 5 – Optimization
All previous levels provide reactive "test challenge – response" mechanisms. What is specific to the last TMM level is preventing errors, by establishing fault tolerance mechanisms. There are a few ways to implement this level, for example by writing test-friendly documentation and test-friendly code.
Here is a list of the requirements for supporting test-driven requirements:
„ Transform the requirements/architecture from a business format into a hierarchy of formal software models: conditions, algorithms, state machines, instruction sets, message diagrams, syntax.
„ Define tags for each requirement/architecture item for traceability and testability.
„ Establish traceability of testware to documents (requirements, architecture, design, interface specification).
„ Review requirements, architecture, design and interface documents for testability.
Here is a possible list of requirements for supporting test-driven code:
„ Provide access to all object APIs and messaging interfaces with other subsystems.
„ Provide access to the object states and attribute values and return them in TLV format.
„ Report all errors/warnings with a predefined format.
„ Redirect system messages to external interfaces.
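As a sketch of the TLV (type-length-value) idea mentioned above, here is a minimal encoder and decoder; the type codes and attribute values are invented for illustration.

```python
# TLV sketch: each attribute is emitted as a type byte, a length byte and
# the value bytes, so an external test tool can parse state dumps without
# knowing the layout in advance. Type codes are invented.
import struct

TYPE_STATE, TYPE_ATTR = 0x01, 0x02

def tlv_encode(items):
    out = bytearray()
    for type_code, value in items:
        data = value.encode("utf-8")
        out += struct.pack("BB", type_code, len(data)) + data
    return bytes(out)

def tlv_decode(buf):
    items, i = [], 0
    while i < len(buf):
        type_code, length = buf[i], buf[i + 1]
        items.append((type_code, buf[i + 2:i + 2 + length].decode("utf-8")))
        i += 2 + length
    return items

encoded = tlv_encode([(TYPE_STATE, "running"), (TYPE_ATTR, "retries=3")])
decoded = tlv_decode(encoded)
```

Because each item is self-describing, new attributes can be added to the dump without breaking older test tools, which is what makes the format test-friendly.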
There is only one mechanism to implement these rules: promote them to guidelines and then verify the compliance of the documents and code during the respective reviews. Review metrics should be uploaded to the dashboard. The audit system pulls data and applies specific constraints that result in possible violation notifications to the respective parties. The results of these audits are presented as a set of prevention dashboards, as set out in Figure 6.

Figure 6: Dashboards monitor the process of error prevention (prevention dashboards for requirements testability, test consistency, test harness and CR lessons, per feature)
This level is achieved when no implementation errors seep into the production code.
Conclusion
This article shows a specific implementation of the five TMM stages. It is an attempt to keep the description generic, assuming that similar approaches can be used in any software company. The sequence in which the practices are implemented, and their specifics, depends on a company's test culture and its weakest areas. However, in one way or another, all these practices have to be in use in a mature test organization.
by Isabel Evans
Make yourself heard
Jerry Seinfeld once spoke about that oft-quoted 'fact' that people fear public speaking more than they fear death. He observed: “Death is number two. Does that seem right? That means to the average person, if you have to go to a funeral, you're better off in the casket than doing the eulogy.”
Whilst we hope that addressing an audience is preferable to a meeting with the Grim Reaper, it is probably fair to say that public speaking causes many, if not all of us, some anxiety and trepidation.
BCS SIGiST is offering a place to four testers on its 'New Speakers Mentoring Scheme'. As we progress in our careers, the ability to get our message across by speaking to a group becomes even more important.

Yet at the same time, we also recognize that identifying an issue, arriving at a solution and convincing peers and line-managers that a course of action can and should be followed is an essential skill in business today.
All too often, we hear that if testers are to add value, we need to break out of our 'silo' and question what is being tested. Agile demands ongoing dialogue and collective problem solving, and so sound communication skills are becoming as vital as a tester's ability to be analytical.
The BCS SIGiST has a mission to help the development of testers in their careers, and that includes helping provide a platform for those who want to improve their public speaking. And so we are offering up to four new or improving speakers, based in the UK, the chance to be mentored to improve their presentation skills.
Being able to present well is important for all testers, not just those people who want to share their stories with a wide audience. In the course of our working lives we will often need to be able to put a point across clearly at a meeting, argue a case, persuade others, and share information in an engaging way.
Spoken rather than written reports are often more useful and appropriate in projects. Our reports at stand-ups and progress meetings need to be succinct and worth listening to, if we are to engage our colleagues, customers and stakeholders. We often have messages that are difficult to get across or hard for people to hear. We need to be able to take questions and answer them. As we progress in our careers, the ability to get our message across by speaking to a group becomes even more important.
Speaking at the SIGiST, especially as part of the New Speaker Mentoring Scheme, is an opportunity to improve your working practices around preparedness, spontaneity in response to unexpected questions, ability to make a case coherently, and spoken communication, providing you with enhanced work and life skills.

We will pair UK-based applicants with mentors during the year to prepare them for a short speaking slot at the BCS December conference. Our mentors are Julian Harty, Graham Thomas, Mieke Gevers and Dorothy Graham – all seasoned presenters who are committed to helping nurture talent.

The theme for the conference day is Challenge Yourself and we want you to tell us about a testing challenge you have faced, how you overcame it, or – even more interesting – how you failed and the lessons you learned.

Here's a summary for Professional Tester readers of how to apply.

How do I apply?
Download the application form (http://www.bcs.org/category/18795) and send it to the BCS Programme Secretary, Isabel Evans (her email address is on the form) in an email titled "SIGiST New Speakers".

When do I need to do this?
Do it now! The deadline is 31st July 2016.

What is the process the SIGiST will use to select the successful applicants?
The Programme Secretary and Mentors will review the applications, select four and assign each one to a Mentor. Remember, what we are looking for at this stage is your idea for a presentation. Write that in the abstract. Then fill in the key points you wish to highlight. These don't need to be perfect yet.

What happens then?
You will provide an improved abstract submission by the end of August, and you will present a 10 to 15-minute talk at the BCS SIGiST on Wednesday 7th December 2016.

What can I expect from my mentor?
You will be matched with one of four world-class testing experts and speakers who will:
„ advise how to make a really appealing abstract to submit
„ guide you in preparing your submission and explain the presenting technology
„ review and help you rehearse your presentation
„ introduce you when you speak at the BCS SIGiST conference in London

So your mentor will provide advice, review comments and discuss your ideas with you during this time, but you are responsible for your content and delivery.

I'm not based in London; can you help me with travel?
Ask your company to pay your expenses as part of your professional development. If that is not possible, discuss with the Programme Secretary as we pay expenses in some circumstances. Remember, we are looking for UK-based speakers in this scheme.

When and where is the conference?
Wednesday 7th December 2016, BCS Offices, Davidson Building, 5 Southampton Street, London WC2E 7HA.
Feature
Find out more about BCS SIGiST mentors:
Dorothy Graham www.dorothygraham.co.uk
Graham Thomas www.badgerscroft.com
Mieke Gevers http://bit.ly/23gbi2O
Julian Harty http://bit.ly/1WbFQRo

Isabel Evans is the BCS SIGiST programme secretary. She has more than 30 years' experience in the IT industry, mainly in quality management, testing, training and documentation.
If IT quality matters to you, you need Professional Tester, the original and best journal for software testers.
Read the latest testing news and articles and subscribe to the digital magazine free at professionaltester.com
Professional Tester has led knowledge-sharing, commentary and opinion about testing methods, techniques and innovation since 1999
Be part of the conversation
Subscribe. Contribute. Advertise.
by Hans Buwalda
Test design driven automation
PT - June 2016 - professionaltester.com 12
Test automation
Automated testing has never been more important, and is gradually developing from a nice-to-have into a must-have, particularly with the influence of DevOps.
In essence, DevOps means that the deployment process gets engineered just as the system being deployed is. This allows rebuilding and redeploying, in one form or another, to happen whenever there are changes in the code. It is similar to the "make" process known in the UNIX world, where parts of a build process can be repeated when code files change. A major aid in that process is when tests can be created and automatically executed, verifying that the new deployment will work.
LogiGear's Hans Buwalda sets out his view on the prerequisites for achieving automation success.
When I start a project my first question isn't "what do I build?" but "how do I test it?"
For the lowest level of testing, the unit tests, this is not arduous. Unit tests are essentially functions that one-by-one test the other functions (methods) in a system. One step higher, component tests can verify the methods exposed by a component, usually without having to worry about the UI of the system under test. Similarly, REST services can easily be accessed by tests. In all these cases the automation of the tests is intrinsic, and such tests are usually not that sensitive to changes in the target system.
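To illustrate the "functions testing other functions" point, here is a minimal unit-test sketch using Python's unittest module; the function under test is invented for this example.

```python
# A unit test is itself a function that exercises one other function.
import unittest

def discount(price, percent):          # the (invented) function under test
    return round(price * (1 - percent / 100), 2)

class DiscountTest(unittest.TestCase): # unit tests: functions testing a function
    def test_ten_percent(self):
        self.assertEqual(discount(100.0, 10), 90.0)

    def test_zero_percent(self):
        self.assertEqual(discount(59.99, 0), 59.99)

suite = unittest.TestLoader().loadTestsFromTestCase(DiscountTest)
outcome = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because such tests talk straight to the code, they run in milliseconds and can be repeated on every build, which is what makes them "fully automated by nature".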
However, higher level tests like functional and integration tests can be more cumber-some, particularly if they have to work through the UI. It is this category that this article will address.
A very powerful approach to testing is known as "exploratory testing", a term coined by Cem Kaner, and further developed by James Bach, Jon Bach, Michael Bolton and others. However, exploratory testing is not intended as a starting point for automation. It is meant as a "testing by learning" approach. Testers, preferably in groups of two and not following a script, will exercise a system, to get to know it and in that way identify potential issues. The strength of exploratory testing is the ability to find unanticipated issues in an application, something that is harder to do with tests that are prepared and automated in advance, which I'll revisit later in this article. For more information about exploratory testing, you can visit Satisfice.com. Figure 1 outlines the three major test categories and their suitability for automation.
The role of test design
In most project testing, automation is seen as an activity, a chore that needs to be done, but is not particularly inspirational. The focus is then usually on the technical activity of scripting often lukewarm test cases. The result can be a fairly trivial test set that, due to poor structure, is also hard to maintain. Figure 2 shows an example, adapted from an actual test project.

As you can see, this test is not easy to digest. There are many detailed steps and checks, and there is no scope for them. The effect is that the test is hard to read, and as a result, even harder to maintain or manage. When tests look like this, one should not expect an automation engineer to produce well-structured and maintainable automation for them; it's not feasible.

The example in Figure 3, using a BDD 'Given-When-Then' (GWT) format, also comes from a real project. It shows how using a strong format for communication, GWT, will not help much if it is not used very well.

To improve the way tests are written we have to look at the test developers first. Once they are on track producing well-organized tests, the automation engineer will have an easier time with the technical side of the automation.

Actions
A first step to get testers to work effectively with everybody else is to use a format that is both easy to read and easy to automate. We will use "actions" for that, which are keywords with arguments. However, it is also straightforward to translate these into GWT scenarios.

In our tool, TestArchitect, we put our tests in a spreadsheet format. This is not essential to success, but a big benefit is that it makes the tests accessible to non-technical people, like functional testers, domain experts, auditors, etc.

Notice in Figure 4 how well this test can be understood without having to know the details of the UI of the application under test. It could be web based, client server, legacy mainframe, or mobile; for this test it doesn't matter. This makes tests very resilient against changes in the application that do not change the business logic of this scenario.

Test modules
The Action Based Testing approach regards tests as products that have value in their own right. The tests are built as series of keyword-driven actions and organized in products called test modules. A crucial step is the definition of these modules. For example, there is a separation between "business tests" and "interaction tests", meaning tests of business transactions are kept in other modules than tests of the UI interaction with a user. The underlying notion is that the tester creating the test design has a bigger impact on the automation success than the technical automation engineer.

In Figure 5, notice that the first step is identifying the test modules. Think of them
Figure 1: The three major test categories and their suitability for automation

Unit Testing – Relation to code: close relationship with the code. Quality/depth: singular test scope, but deep into the code. Automation: fully automated by nature. Scalability: scalable, grows with the code, easy to repeat.

Functional Testing – Relation to code: usually does not have a one-on-one relation with code. Quality/depth: quality and scope depend on test design. Automation: in particular UI-based automation can be a challenge. Scalability: often a bottleneck in scalability.

Exploratory Testing – Relation to code: human driven, not seeking a relation with code. Quality/depth: usually deep and thorough, good at finding problems. Automation: may or may not be automated afterwards. Scalability: not meant to be repeatable; rather do a new session.
Figure 2: Steps
step 16: Open http://www.bigstore.com – The "BIG Store" main page is displayed, with a "sign in" link
step 17: Click on "Sign In", upper right corner – A sign in dialog shows, the "Sign in" button is disabled
step 18: Enter "johnd" in the user name field – The "Sign In" button is still disabled
step 19: Enter "bigtester" in the password field – Now the "Sign In" button is enabled
step 20: Click on the "Sign in" button – The page now shows "Hello John" in the upper right corner
step 21: Enter "acme watch" in the search field – The "Search" button is enabled
step 22: Click on the "Search" button – 5 watches of Acme Corporation are displayed
step 23: Double click on "Acme Super Watch 2" – The details page of the Acme Super Watch 2 is displayed
step 24: Verify the picture of the watch – The picture should show a black Acme Super Watch 2
step 25: Select "red" in the "Color" dropdown list – The picture now shows a red Acme Super Watch 2
step 26: Type 2 in the "Order quantity" textbox – The price on the right shows "$79.00 + Free Shipping"
step 27: Click on "Add to cart" – The status panel shows "Acme Super Watch 2" added
step 28: Click on "Check out" – The Cart Check Out opens, with the 2 Acme Super Watches
Figure 3: A strong communications format can still result in poor usability
Given User turns off Password required option for Drive Test
And User has logged in by Traffic Applicant account
And User is at the Assessments Take a Test page
And User clicks the Traffic Test link
And User clicks the Next button
And User clicks the Sheet radio button in Mode page if displayed
And User clicks the Start button
And User waits for test start
And User clicks the Stop Test button
When User clicks the Confirm Stop Test button
And User enters the correct applicant password
And User clicks the Confirm Stop Test button
Then The Test is Over should be displayed in the Message label
And the value of the Message label should be The test is over
And The Welcome to Traffic Testing page should be displayed
Business objects and business flows
Having a clear, modularized test design greatly assists in managing and maintaining automated tests. The identified modules work like buckets into which you can place your test cases. However, it is not always easy to decide where to start. We have had good experience with using "business objects" and "business flows" as a starting point. In this approach, you would:
„ identify the business objects with which your application works

„ identify a number of business flows: end-to-end transactions that involve multiple business objects

Think of these objects and flows as chapters in a book. Once you have defined them, the rest is a matter of putting the text in the right chapters. Structuring tests this way gives them a clear scope, which in turn helps when deciding which actions to use and which checks to create. For example, some test modules are "interaction tests", which test whether the user can interact with the application, while other modules take a "business test" perspective. A key requirement for success is to keep these two kinds in separate test modules. It should not happen that a business-level test, like checking the billing amount of a car rental as shown in Figure 5, contains navigation details, like a "select menu item" action.
Test automation
PT - June 2016 - professionaltester.com 14
open account
  acc nr   first   last
  123123   John    Doe

deposit
  acc nr   amount
  123123   10,11
  123123   20,22

check balance
  acc nr   expected
  123123   30,33

Figure 4: Open Account
[High-level test design / test development plan, covering interaction tests and business tests: define the "chapters" (test modules), create the "chapters", create the "words" (actions), then make the words work in the automation.]
Interaction test actions (example):

enter
  window  control    value
  log in  user name  jdoe
  log in  password   car guy

check property
  window  control    property  expected
  log in  ok button  enabled   true

Business test actions (example):

log in
  user  password
  jdoe  car guy

enter rental
  first  last    brand  model
  Mary   Renter  Ford   Escape

check bill
  last    total
  Renter  140.42
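Action-word tables like these are typically executed by an interpreter that maps each action word to a handler function, with the remaining cells becoming the handler's arguments. A minimal sketch of that idea in Python (the handler names and the in-memory account store are invented for illustration; a real framework would dispatch to UI or API drivers instead):

```python
# A minimal keyword-driven interpreter: each action word from a test table
# maps to a Python handler, and the argument cells become its arguments.
def open_account(accounts, acc_nr, first, last):
    accounts[acc_nr] = {"name": f"{first} {last}", "balance": 0.0}

def deposit(accounts, acc_nr, amount):
    accounts[acc_nr]["balance"] += amount

def check_balance(accounts, acc_nr, expected):
    actual = accounts[acc_nr]["balance"]
    assert abs(actual - expected) < 0.005, f"expected {expected}, got {actual}"

ACTIONS = {
    "open account": open_account,
    "deposit": deposit,
    "check balance": check_balance,
}

def run_test(rows):
    accounts = {}  # shared context that the actions operate on
    for action, *args in rows:
        ACTIONS[action](accounts, *args)
    return accounts

# The account example from Figure 4, expressed as (action, arguments...) rows:
state = run_test([
    ("open account", "123123", "John", "Doe"),
    ("deposit", "123123", 10.11),
    ("deposit", "123123", 20.22),
    ("check balance", "123123", 30.33),
])
```

The point of the dispatch table is that testers write rows, not code: adding a new action is one handler plus one dictionary entry.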
[Figure 5 diagram: Test Module 1 through Test Module N, each with its own objectives and test cases, all built on a shared set of actions that drive the automation.]

Figure 5: Overview test modules approach
„ define "other tests" for anything that doesn't fit the above two buckets
For example, an e-commerce site might have the following:
„ business objects: articles, customers, staff members, promotions, payments

„ business flows: place/fulfill/pay an order; introduce a new article and sell it

„ other tests: authorizations, extensibility, interoperability
Within each of these categories you can have business tests and interaction tests. For business objects, the business tests will exercise various life cycle operations, like creation, modification and closure. The interaction tests will look at the dialogs involved, and test details such as: "does a drop-down box have the correct values?"
In Figure 6, the examples show tests around "promotions". There will be many kinds: percentage or fixed cash, overall or per article/country, and fixed time period or perpetual. We will discuss the action "time travel" later in this article.
You can also have additional categories for the tests. These could typically be interaction tests; in Figure 7, the test fragment verifies whether the Dutch town (municipality) "Tietjerksteradeel" is in the list of towns.
Testability
Test design is, in my view, the most important driver for automation success. However, following closely behind is "testability": is the application under test prepared for testing, in particular automated testing? Testability should be a high-priority requirement. When I start a project my first question isn't "what do I build?" but "how do I test it?"
Surprisingly, many applications are not very testable. LogiGear has several game companies as customers, and in more than one instance, the only access to a game is the graphical display of it. We have experience in image-based testing, and were able to handle the tests, but it requires a great deal of effort and the result is ultimately hard to maintain.
Testability starts with good system design. If a system has clear components, services, and tiers, there will be many ways tests can access it, to set up situations and verify outcomes. That access can be UI and non-UI, depending on the scope of the tests.
create promotion
  name       start       finish      percentage  category
  christmas  1-12-2016   25-12-2016  5           all
  tablets    20-12-2016  1-1-2017    11          tablets

time travel
  date
  23-12-2016

check nett price
  article      price
  iPad Mini 4  338,19

Figure 6: Promotion 1
click
  window  control
  main    new promotion

select
  window     type
  promotion  town

check list item exists
  window     list   value
  promotion  towns  Tietjerksteradeel

Figure 7: Promotion 2
A top-priority testability item is the identification of UI elements, like controls in a desktop application, or HTML elements on a web page. UI elements are typically identified by test tools using their properties, and changes in those properties are a big source of problems for test automation. If a button is identified by its user-visible caption "Submit", and in a subsequent version of the application that caption is changed to "OK", the tests won't be able to find the button anymore and will stop working. However, virtually all desktop platforms, like Java/Swing or WinForms, allow controls to have a hidden "name" property, which is easy for a developer to define and thoroughly solves the identification problem for the testers.
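The difference between identifying a control by its visible caption and by a hidden, developer-assigned name can be illustrated with a simplified in-memory control list (this is not a real UI toolkit API; the control names are invented):

```python
# Simplified illustration: a control located by a stable developer-assigned
# "name" survives a caption change; a control located by caption does not.
controls_v1 = [{"name": "btnSubmit", "caption": "Submit"}]
controls_v2 = [{"name": "btnSubmit", "caption": "OK"}]  # caption changed

def find_by(controls, prop, value):
    """Return the first control whose property matches, or None."""
    return next((c for c in controls if c.get(prop) == value), None)

# Caption-based lookup breaks on the new application version:
assert find_by(controls_v1, "caption", "Submit") is not None
assert find_by(controls_v2, "caption", "Submit") is None

# Name-based lookup keeps working across versions:
assert find_by(controls_v2, "name", "btnSubmit") is not None
```

Real test tools apply the same principle through their object repositories or locator strategies: prefer the property the developers promise to keep stable.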
An equally high-priority testability requirement is timing. There are many cases where, for example, a table on a screen is populating with values, and a test has to wait until that process has finished before verifying a single cell value. Quite often the tester will not have a clear criterion that the test can wait for, and will use some arbitrary hard-coded waiting time. This typically means that the test slows down if the waiting time is too long, or breaks if it is too short. This gets worse when the tests are executed on virtual machines, which is becoming standard practice. We have several large projects that run tests for hours or even days in a row, and occasionally break on time-outs. It is, however, usually very straightforward for a developer to offer a criterion that the test can wait for, like a dedicated property of the earlier-mentioned table control.
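The fix is an explicit wait: poll whatever readiness criterion the developer exposes (such as a "loaded" property on the table control) with an upper time bound, instead of sleeping for a fixed period. A generic sketch in Python (the simulated table stands in for the real control):

```python
import threading
import time

def wait_until(condition, timeout=10.0, interval=0.1):
    """Poll `condition` until it returns True, or fail after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout}s")

# Simulated application behaviour: a table that finishes loading after 0.3s.
table = {"loaded": False}
threading.Timer(0.3, lambda: table.update(loaded=True)).start()

# The test waits on an explicit criterion instead of a hard-coded sleep:
wait_until(lambda: table["loaded"], timeout=5.0)
```

On a fast machine the wait returns almost immediately; on a slow virtual machine it simply polls longer, up to the timeout, which is exactly the behaviour a hard-coded sleep cannot give.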
Apart from timing hooks, other "white box" access to the internals of a system can be helpful as well. For example, the graphical game can provide features to let the test know which objects (like "monsters") are on the screen and where they are, so that in the case of monsters the test can "shoot" them. Another example is the graphics we test for a geophysical application: for tests that want to verify the numbers being displayed, it is much easier to get these numbers via an API call than to reverse-engineer them from an image.
Teamwork
In addition to test design and testability, the cooperation between the various players is a major driver for testing and automation success. The reverse is also true: if tests are comprehensive, efficient and run stably, they are of great help to the rest of the development and delivery processes. In particular, DevOps-style processes will run smoothly.
For agile projects, the team is the first place where cooperation will and should take place. The tester and automation engineering roles can work with the product owners and the developers. Often tests can even give direction to the rest of the project, but I would argue that is a welcome benefit, not their primary role (except maybe for unit tests).
The testers in the team should start the sprint focusing on the higher-level test modules first, which should ideally be at a similar level to the incoming sprint backlog items, like user stories and acceptance criteria. Since in a typical sprint the UI details are not known or stable yet, testers won't be able to write interaction details in the tests, which I consider a good thing: those come later. In the early phase, the testers should also discuss with the developers what UI items will be created, and what their internal identifying properties are going to be. The tester can then manually create and maintain the interface mapping, thus eliminating a large part of the automation work. As the sprint continues, the actions used in the initial test modules can be automated, and more detailed interaction tests can be created as well, in their own modules.

Hans Buwalda leads LogiGear's research and development of test automation solutions, and oversees the delivery of advanced test automation consulting and engineering services. He is also the original architect of the keyword framework for software testing organizations.
Summary
Automation has many facets. It is often seen as a must-have, but it does not get the priority it deserves. Working with an overall test design, and having applications developed in a testable way, can be a great help. It all comes together with the attention and cooperation of all involved, which can make automated testing a success and, as a result, help the success of the project overall.
by Huw Price
Meeting the challenge of API testing
APIs are by no means new, and componentizing is a fundamental of good programming. However, nowadays practically every organization is cashing in on the business value of making sections of code readily available to other applications. These range from new customer channels to better integration and visibility, and this rise to ubiquity makes the need to rigorously test APIs more pressing than ever.
At first, the challenges of API testing look similar to those of testing complex chains of applications end-to-end, where the job of the tester is to understand the causes across applications which led to an eventual result. This, however, is often considered the most difficult aspect of API testing, and trying to get all of the moving parts in the correct state at the right time is sometimes considered almost impossible.
Why a rigorous and model-based approach is needed in API testing
Some of the reasons for the complexity in API testing are set out in this article.
The growth of API use means that testers must accept the development practices associated with them, recognizing that a legacy approach has failed. The central tenet of this article will be that testers should adopt the role of "critical modeller", and should strongly influence the design and implementation of APIs. They should be involved far earlier, and this is necessary to achieve sufficiently rigorous testing when faced with the complexity of APIs.
Current API testing methods
Most API testing is currently performed by some kind of test harness which has been created by hand, for example by manually writing scripts in Python or JavaScript to trigger an API. There are some tools that can automatically create basic tests from a protocol, but these tests are generally primitive. Other tools can track traffic and replay it in testing frameworks later. Typically, neither method leads to rigorous testing; the unsystematic approach to test creation prevents measurability, and these methods cannot, therefore, provide estimates of either risk or test coverage.
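Such a hand-written harness often amounts to little more than a loop of calls and asserts. A schematic sketch of the pattern (here `call_api` is a stand-in for a real HTTP call made with a client library; the endpoints, data and status codes are invented for illustration):

```python
# Schematic hand-rolled API test harness: a table of cases and a loop.
# call_api fakes a customer-lookup service so the sketch is self-contained.
def call_api(resource, cust_id):
    fake_db = {1: {"name": "Huw Price"}}
    if resource != "/customers":
        return 404, None
    record = fake_db.get(cust_id)
    return (200, record) if record else (404, None)

cases = [
    ("/customers", 1, 200),   # existing customer
    ("/customers", 99, 404),  # unknown customer
    ("/orders", 1, 404),      # unknown resource
]

results = []
for resource, cust_id, expected_status in cases:
    status, _ = call_api(resource, cust_id)
    results.append(status == expected_status)
```

The weakness the article describes is visible even here: the case table is whatever the author thought of, with no systematic derivation and no measure of what fraction of the API's behaviour it covers.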
In order to sufficiently test APIs which change frequently, these scripts should be created automatically as part of a controlled test automation framework. A model-based approach can introduce the required rigor, while newly available and scalable testing frameworks mean that testing can keep up with the rate of change. This approach will be set out later in this feature.
Test strategy

Observability: is the API actually testable?
The first question to ask is: "Is the API testable?" Though much research has been done around the testability of systems (see Richard Bender), it is rarely implemented within traditional IT engineering disciplines. As a consequence, testers often get a result for the wrong reason, where the cause of the eventual event was different from what they thought. For instance, two defects might cancel each other out, producing a false positive. This lack of observability in testing then leads to frustration when fixing bugs unearths a myriad of others and development feels like it is moving backwards.

This is one reason for involving testers earlier, in the actual design and implementation of APIs. The design should be changed to make it possible to design or "observe" the reason for a result. In practice, this can mean leaving "breadcrumbs" such as probe points or data that can be used in testing to verify that the right result was achieved for the right reason.

A good example of this is found in car management systems, where probes are used to identify exactly where a system is failing. The difficulty, however, is getting the correct number of probes in the right place. APIs are no different, and a simple audit log or more detailed return message can be hugely beneficial when testing the results from an API.

The need to overcome complexity in API testing
A further case for involving testers as early as possible, and for model-based testing as well, is to overcome the complexity of APIs.

Broadly speaking, there are two types of test cases which must be covered sufficiently:

„ Positive testing – here, the process is defined clearly and the tester ensures that all decisions and function points have been "covered" by the designed test cases.

„ Negative testing – here, test cases define the edge cases which should be rejected and simulate the data scenarios to ensure that the API rejects them correctly.

Numerous factors can cause the number of possible test cases across both categories to grow far beyond the capability of manual test case design, and this challenge needs to be overcome.

APIs calling other APIs
Most APIs are assemblies of calls to other APIs, each with their own rules which can create unexpected results. The complexity of an API can, therefore, grow exponentially as it is combined with other API calls.
Figure 1: Version dependency mapping of three APIs, with constraints restricting which versions can occur together.
Figure 2: The logic of a customer API
Versioning of APIs
Versioning is a further cause of growing complexity in API testing. Most systems have a degree of deprecation, so an API must be able to handle an old version calling new versions, or a combination thereof. The API must recognize missing values and assign some kind of default to allow the old version to work. What's more, it might be the case that some versions of one API can be called by some versions of another but not others, and the numerous possible combinations must therefore be tested.
The model in Figure 1 is a dependency map of three APIs, with some logic between them. There are multiple versions of each API, some of which can work with others. Without considering different versions of the API, there are 127 combinations which need to be tested. However, when the versioning is overlaid, there are 10,287 possible combinations which need to be tested, and it is not likely that manual scripting will cover a sufficient proportion of these.
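Enumerating and counting these combinations is mechanical once the dependency map is written down. An illustrative sketch (the version sets and the single constraint are invented, not the model behind the 127 and 10,287 figures):

```python
from itertools import product

# Hypothetical version sets for three APIs in a dependency map.
versions = {
    "A": ["1.0", "1.1", "2.0"],
    "B": ["1.0", "2.0"],
    "C": ["1.0", "1.1"],
}

def compatible(combo):
    # Invented constraint: API A 2.0 requires API B 2.0.
    a, b, c = combo
    return not (a == "2.0" and b != "2.0")

all_combos = list(product(versions["A"], versions["B"], versions["C"]))
valid = [c for c in all_combos if compatible(c)]
# 3 * 2 * 2 = 12 raw combinations; the constraint rules out 2 of them,
# leaving 10 that a version-aware test suite would need to consider.
```

Even this toy map shows why manual scripting falls behind: the raw product grows multiplicatively with each API and version added, while the constraints that prune it must themselves be tested.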
Ordering of APIs
The calling order of APIs must also be factored into test case design, and this can further cause the number of possible test cases to skyrocket. Usually one API must be called before another, and so on, but in rare instances APIs can be called in random sequences too. Here, the number of combinations will be huge, and the only real way to test the combinations is to consider each in isolation, making sure that you test each trigger and each effect separately.
Introducing rigour with model-based testing
Considering the above factors, it will be almost impossible to exhaustively test every combination involved in a given set of APIs using typical approaches to API testing. Instead, testing must be realistic and proportionate, as well as automated and systematic.
test_name    | Action | CustomerNumber | NewPayload         | Payload            | expected_results
Invalid1     | Change | 1              | Names=James walker | Name=Huw Price     | API fails
InvalidParm1 | Change | 1              | Name=?             | Name=Huw Price     | API fails
Valid1       | Change | 1              | Names=James walker | Names=James walker | Payload Displayed
Invalid2     | Change | 1              | Names=James walker | Name=Huw Price     | API fails
InvalidParm2 | Change |                |                    | Name=Huw Price     | API fails
Invalid3     | Lookup | 1              |                    | Names=James walker | API fails
Valid2       | Lookup | 1              |                    | Names=James walker | Payload Displayed
Invalid4     | Lookup |                |                    |                    | API fails
InvalidParm3 | Create |                | Name=?             |                    | Invalid Payload; API fails
Valid3       | Create | 14             | Names=James walker | Name=Josh Taylor   | NewCustomerID; Payload Displayed
Invalid5     | Create | 15             | Names=James walker |                    | NewCustomerID; API fails
InvalidParm4 | Create | 1              | Name=Josh Taylor   |                    | API fails
This combinatorial growth is a particularly acute problem for manual, unsystematic test design.
APIs and Units of Work: testing rollbacks and cleanups
With multi-tier architecture, testing must further cope with a unit of work which spans multiple APIs. Whereas previously it might have been possible to roll a system back relatively easily to a point before a failure, with multiple APIs separate rollback and cleanup processes might be needed. Each cleanup itself will need rigorous testing, and it is not an exaggeration to say that testing a failure and rollback can be hundreds of times more complicated than posting the initial transaction.
Figure 3: The twelve logical combinations derived from the model in Figure 2.
Figure 4: Subflows have been created to designate what needs virtualizing, and the process behind the virtualization. These subflows (orange) are then incorporated under the master process, ready for testing.
This requires the ability to prioritize tests in a systematic way, and model-based testing offers a highly structured approach to overcoming the complexity of API testing.
Test categorization
As exhaustive testing will rarely be possible, you should first break down test types based on units of work and the availability of machine power. For example, if you have enough virtual machines you could consider continually exhaustively testing some APIs, especially if they are critical components called by many other processes, before connecting APIs up into different categories of tests. The types of testing can be loosely categorized as:
„ Exhaustively testing a single API – If the number of combinations is reasonable then consider testing all possible scenarios, as is set out in the next section.
„ Multi-version testing – If there are multiple versions of an API being called or available, then they should be modelled and their dependencies should be tested to ensure cross-version stability.
„ Order-sensitive API testing – The order each API is called in must be modelled and understood and tests designed to cover an appropriate level of combinations.
Figure 5: The subprocess has been incorporated into a test case to test a path through the master flow.
Figure 6: The basic test modelled above now has optional Virtual Test control added
„ Failure testing – Testing failure should be considered a specific set of tests, i.e. test the successful rollback of a unit of work to a "data safe" state.

„ Chain testing – The linking of APIs together.
Exhaustively testing an individual API
As an example for this article, a single customer API will serve as the starting point for exhaustive testing.
Figure 3 shows that there are 12 possible combinations through the logic gates: 3 valid and 9 invalid combinations to be tested. This is a simplified version, and you would have far more logic around the payload validation; however, creating a completely automated test harness for this API would be straightforward, as is set out next.
Incorporating virtualization in API testing
In order to test this API, it is likely that you will need to use virtualization. In this simplified example, some of the negative tests (starting InvalidParm) can be set up by forcing deliberately bad values into the API parameters (name=?); however, for the other negative tests, you will have to simulate either a slow response, a fault or a database update failure. In order to test these combinations as part of the master flow, you would need to change your API to a virtualized version. The test case design logic housed in the example flowchart has therefore been adjusted to allocate virtual endpoints within the tests. In Figure 4, the subflows (orange) set out the APIs which will be virtualized, as reflected in the indented steps in the test case shown in Figure 5.
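In code-level terms, virtualizing a dependency means substituting a controllable fake for the real endpoint so that faults and slow responses can be forced on demand. A minimal Python sketch using unittest.mock as a stand-in for a service-virtualization tool (the backend object and the outcome strings are invented for illustration):

```python
from unittest import mock

class Backend:
    """Stand-in for the real dependency the customer API calls."""
    def db_update(self, payload):
        return "ok"

backend = Backend()

def customer_api(payload):
    # Simplified API under test: wraps the backend call and maps
    # faults to "API fails" outcomes like those in the test case table.
    try:
        backend.db_update(payload)
    except TimeoutError:
        return "API fails: slow response"
    except RuntimeError:
        return "API fails: database error"
    return "Payload Displayed"

# Positive path goes through the (stubbed) real dependency:
assert customer_api({"Names": "James walker"}) == "Payload Displayed"

# Negative paths: virtualize db_update to force a fault or a slow response.
with mock.patch.object(backend, "db_update", side_effect=RuntimeError):
    fault = customer_api({"Names": "James walker"})
with mock.patch.object(backend, "db_update", side_effect=TimeoutError):
    slow = customer_api({"Names": "James walker"})
```

The context managers restore the real dependency on exit, which mirrors how a virtualization tool switches an endpoint between live and simulated modes per test.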
Moving from exhaustively testing an individual API to chain, integration, multi-version, failure and order-sensitive API testing
Once test cases have been defined to exhaustively test an individual API, you can use model-based techniques to connect components into further types of tests. Effective testing of chains of API calls or integration testing requires that you fully understand each API, its causes and effects, as well as its data dependencies. You can additionally combine the version compatibility matrix (as defined in Figure 1), while selectively choosing to virtualize certain decisions to merge functional and logical testing in one model, as shown in Figure 6.
Auto-generate scripts, data and virtual endpoints
So far, we have created a model which incorporates the various APIs to be tested and their relationships, while also specifying what needs to be virtualized. Test data and automation code snippets can further be overlaid, but the question remains of how to actually generate the tests systematically.
As the flowchart model is mathematically precise, paths can be generated automatically from it. These paths are equivalent to test cases which can then be converted into automated tests, as shown in Figure 7. If automated test scripts and test data have been overlaid into the flowchart itself prior to the creation of the optimized paths, then the manual effort of generating automated tests can be substantially reduced.

Huw Price is vice president, application delivery & global QA strategist, at CA Technologies. Huw's 30-year career in testing has given him a deep understanding of testing as a science and how it can be applied to modern organizations so that they can meet today's challenges.
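Generating paths from a flowchart model is, at heart, a graph traversal: every route from the start node to a terminal node through the decision gates becomes one candidate test case. A minimal sketch over an invented flow (real tools also attach data, constraints and script snippets to each node):

```python
# Enumerate all paths through a small flowchart modelled as a directed
# acyclic graph. Node names are invented; each complete path from "start"
# to a terminal node corresponds to one generated test case.
flow = {
    "start":    ["validate"],
    "validate": ["lookup", "reject"],
    "lookup":   ["display", "fail"],
    "reject":   ["end"],
    "display":  ["end"],
    "fail":     ["end"],
    "end":      [],
}

def all_paths(graph, node="start", prefix=()):
    prefix = prefix + (node,)
    if not graph[node]:          # terminal node: one complete path found
        return [prefix]
    paths = []
    for nxt in graph[node]:
        paths.extend(all_paths(graph, nxt, prefix))
    return paths

test_cases = all_paths(flow)
# Each path, e.g. ('start', 'validate', 'reject', 'end'), can then be
# converted into an automated script plus the data it needs.
```

Because the traversal is exhaustive over the model, coverage becomes measurable: "all paths", "all edges" or "all decision outcomes" are well-defined targets rather than guesses.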
Combining the automated test design and virtual responses with version compatibility allows you to define combinations of tests in a much more structured way. Faced with a large number of possible combinations, you have a clear measure of coverage and risk with which to prioritize tests.
Summary
The ability to selectively stabilize some of the API calls in a controlled way provides a much more structured and rigorous approach to testing APIs. The use of model-based testing techniques, whereby each decision gate is tested with the minimum number of test cases in conjunction with a risk-based approach, further means that API testing can be optimized to more rigorously test a given set of APIs, even when faced with huge complexity.
Figure 7: The flow (top) shows the subflow models of APIs connected into a chain; the scripts (bottom) are created as an output of the test case design tool.
by Bogdan Bereza
Testing Talk
Interview
Anthea Whelan talks to Derk-Jan de Grood, a thought leader in testing circles, as well as an agile transition coach. A senior consultant for Valori, Derk-Jan has won the European Excellence award, has published several successful books and frequently presents keynote speeches at a variety of agile testing conferences. You can also see him during the upcoming Test Automation Day in Rotterdam on 23 June.
You mention that you enjoy studying trends and how they affect us. Has anything caught your eye lately?
Does anything still surprise you?
I did a workshop recently about sharpening the profession and thinking about what sort of skills are required to become a tester and I was surprised by how many people did not even know about agile. There are
I try to collect as much information as possible from a variety of sources. I listen to my peers at conferences to see what is happening and get to visit a great many organisations. I learn more and more where people find their troubles lie, where their struggles are, and from this mix I try to find new solutions. I get new ideas and try them out. You can get a lot of new ideas from trend-watching, but the real challenge is to translate that into benefit for our customers.
There is a lot going on in IT right now. But I tend to focus a lot on agile these days. Within agile the daily focus shifts from working in silos to collaboration, from execution to coaching, from preparing to doing. But the test fundamentals remain in place. I am currently preparing my keynote for the September SIGIST in London, on this topic. It is really interesting. When we apply the agile principles, the test activities may seem the same, but the motivation to do so might change. The scope has widened beyond just finding defects. It's about contributing to business value.
This aligns with a lot of stuff that I have been working with for a long time: the value-driven aspects of IT. Testers should be able to say: “I want to do all the tests you want me to do, but first, why do you want me to do these tests? What are you trying to achieve?”
Switching to automation can initially be expensive.
To me testing is more than just bug-hunting – just making the code a little better. Testing is about aligning with stakeholder needs and addressing their concerns: that adds value, especially in the perception of the stakeholder. Since a lot of projects get into stormy water, the stakeholders very often have big concerns and would really love to have insight into progress, quality and dependencies.
Testing is a means to obtain that crucial information. By making this clear, we get a lot of commitment to do our testing. Whether these activities are focused on bug-hunting or automating tests, the stakeholders provide support since they understand it delivers the information they so desperately need.
Ask: whose is the final decision? Who decides whether they accept this? What must be done so that it will be accepted? What exactly are the acceptance criteria? Testing is a means to obtain that crucial information. Once we have made that clear, whether these activities are focused on bug-hunting or automating tests, stakeholders will provide support as our activities will deliver the information they so desperately need.
You often talk about showing the added value of testing. Why is that important?
still testers – and I am surprised by how many there are – who grew up in the fashion of applying the methods, plus the odd trick or two and then you are done. That no longer works, but there seems to be a group of testers that cling to old values.
Perhaps because of the Millennium bug? In the 1990s, a lot of testers were needed for things like the Y2K bug and the Euro introduction. Many were hired, without any IT background, to manually test these systems. They became very good at functional testing, but perhaps they may have been a little afraid of technique.
For years we'd reason that automation is good, but that we needed to ensure that all the boundary conditions were met before we could start automating - otherwise we would just end up with a mess.
I see a lot of organizations still with a lot of legacy issues – in both equipment and techniques. People have difficulties adopting technologies and tools; in most organizations, it's easier to hire a person than to buy a tool.
Within testing circles, we are asking more frequently if testers really need to be programmers. In the conferences I've attended, the audience usually seems quite evenly split. Meanwhile, the tools have become better, the need to automate is greater, and we have changed our major development methods, so that, in turn, creates the need to automate. A lot of managers aim for DevOps, continuous integration and deployment, and you have to have your house in order if you want to do these things. That is very difficult without automation. So the setting has changed and automation is part of agile now; part of getting your development processes in order.
Business managers have to change their way of thinking. They just want good-quality solutions, quickly, but this is not possible without automation.
Why do you feel that automated testing is still quite slow to be adopted?
Find out more about Derk-Jan de Grood at: djdegrood.wordpress.com/