TEST Magazine - June-July 2009



The June-July 2009 issue of TEST Magazine

Transcript of TEST Magazine - June-July 2009

Page 1: TEST Magazine - June-July 2009

Inside: Testing evolution | 7 deadly automation sins | Introducing outsourcing

Anne-Marie Charrett on meeting the needs of start-ups

IN TOUCH WITH TECHNOLOGY

THE EUROPEAN SOFTWARE TESTER

Supported by

Volume 1: Issue 2: June 2009

Maverick tester

T.E.S.T


Page 2: TEST Magazine - June-July 2009


Full-Time Quality Assurance Manager—Immediate Opening

TestTrack® Pro | TestTrack® TCM | TestTrack® Studio | Surround SCM® | Seapine CM® | QA Wizard® Pro

Issue Management | Test Case Management | Test Planning & Tracking | Configuration Management | Change Management | Automated Testing

www.seapine.com/testmag | Satisfy your quality obsession.

© 2009 Seapine Software, Inc. All rights reserved.

Don’t work yourself to death. Use TestTrack® TCM to manage your testing effort.

TestTrack TCM

Page 3: TEST Magazine - June-July 2009


A year on from the Heathrow Terminal 5 debacle, it seems that the problems experienced by the state-of-the-art facility on its opening have provided the software testing community with a touchstone and a convincing sales story with which to put the case for more, and more thorough, software testing to the wider world.

At the time, British Airways' chief executive, Willie Walsh, told MPs that a lack of IT testing had led in part to the disastrous opening of the new terminal at one of the world's busiest airports. Walsh told the UK Government's Transport Select Committee, which was investigating the failures, that the building of the terminal was not finished on time, and that IT testing and staff training were compromised as a result.

When the terminal opened on 27 March last year, it immediately ran into a slew of problems that Walsh said “cascaded”, leading to losses of £16m for BA in the first five days of operation. The financial case for thorough software testing couldn't be made more clearly!

And indeed, software faults were reported to be the main cause of the airline's problems. Walsh confirmed that the problem with the baggage system was a software filter that was mistakenly left in place after the system – designed by BAA – went live.

Walsh said the filter was used during the testing period to ensure the messages generated were restricted to the BAA operation, and were not sent out further than that. But because it remained in place after the terminal opened, it interfered with the messages coming into the system, meaning the system could not recognise a number of bags.

In addition to these problems, there was also an issue with servers not being able to cope with the volume of baggage being handled. Again, surely more thorough testing simulations would have flagged this incipient catastrophe before it developed.

It is a shame that often something has to go catastrophically wrong before people sit up and take notice, put the correct infrastructure in place and finally stop cutting corners, but I guess it’s just human nature to take the path of least resistance.

In contrast, inside this issue you will find a feature on the importance of testing the potentially life-or-death applications used in the medical device industry. Clearly, bugs and glitches in the software that runs heart monitors, pacemakers and other critical, life-saving devices are unacceptable. If the approach taken in this area had been applied at T5, I'd like to think things would have been markedly different.

Until next issue...

Matt Bailey, Editor

Leader | 1

It’s all about quality


Editor: Matthew Bailey, [email protected], Tel: +44 (0)1293 934464

To advertise contact: Grant, [email protected], Tel: +44 (0)1293 934461

Production & Design: Dean Cook, [email protected]; Barrington, [email protected]

Editorial & Advertising Enquiries: 31 Media, Crawley Business Centre, Stephenson Way, Crawley, West Sussex, RH10 1TN. Tel: +44 (0) 870 863 6930. Fax: +44 (0) 870 085 8837. Email: [email protected]. Web: www.testmagazine.co.uk

Printed by Pensord, Tram Road, Pontllanfraith, Blackwood NP12 2YA

© 2009 31 Media Limited. All rights reserved.

T.E.S.T Magazine is edited, designed, and published by 31 Media Limited. No part of T.E.S.T Magazine may be reproduced, transmitted, stored electronically, distributed, or copied, in whole or part without the prior written consent of the publisher. A reprint service is available.

Opinions expressed in this journal do not necessarily reflect those of the editor or T.E.S.T Magazine or its publisher, 31 Media Limited.

ISSN 2040-0160

Published by:

THE EUROPEAN SOFTWARE TESTER

Page 4: TEST Magazine - June-July 2009


Simply visit www.testmagazine.co.uk/Subscribe.html or email [email protected]

The European Software Tester

SUBSCRIBE TO T.E.S.T.

Published by 31 Media Ltd

www.31media.co.uk

In Touch With Technology

Telephone: +44 (0) 870 863 6930

Facsimile: +44 (0) 870 085 8837

Email: [email protected]

Website: www.31media.co.uk

*Please note that subscription rates vary depending on geographical location

Inside: Delivering with agile; The future of software testing; Anarchy in the QA

Handling the risk: Getting it right with risk-based testing

IN TOUCH WITH TECHNOLOGY

THE EUROPEAN SOFTWARE TESTER

Sponsored by

Page 5: TEST Magazine - June-July 2009


Contents | 3

1 Leader column Editor Matt Bailey learns lessons from the Terminal 5 debacle.

4 Cover story – Maverick tester Anne-Marie Charrett on the challenges of meeting the needs of start-ups.

8 It's testing Jim, but not as we know it! The evolution of software testing in an era where off-shoring and automation are becoming essential parts of the testing lifecycle and rapid technology improvements are constantly invoking change.

12 The seven deadly sins of software test automation Colin Armitage invokes Dante to uncover some serious home truths about automation.

18 Pen testing in the Web 2.0 era Although testing is gradually being recognised as a profession and a natural career path within the IT industry, not all know the real art of testing. Senior tester Don Mills reports.

22 For security, call the fuzz Fuzzing has proven to be a low-cost and very effective technique for software testing. Rodrigo Marcos explains why there's a buzz about fuzz.

26 An introduction to outsourcing As companies have come to realise the importance of testing, they have also discovered the benefits of offshoring. Graham Smith explores the benefits of outsourcing.

30 Doctoring the code Exploring how software testing can help save lives while reducing costs in the rapidly changing medical device industry.

34 Don't be a testing tool junkie Up to 70 percent of the effort for a performance test is spent on scripting. It's time to look at alternatives that allow testers to focus on testing and analysing results, and stop becoming tool junkies.

38 Quality management and the urge to merge Dave Rigler explores some of the ways in which IT systems can be merged and how good quality management can provide the required level of business confidence.

43 T.E.S.T Directory

48 The Last Word – Rosie Sherry Rosie Sherry asks what are the implications of social media, collaboration and crowdsourcing.


Page 6: TEST Magazine - June-July 2009


4 | Test cover story

Dublin-based freelance software test consultant Anne-Marie Charrett explores the pros and cons of her specialist testing area, working with start-ups.

Maverick tester

As a freelance software test consultant, a good proportion of my work is with start-ups. I like it this way. Working with start-ups is an exhilarating process. You work with great creative people who are driven, determined and incredibly passionate about their business. The challenges they face require innovative solutions, driven by the need for a short time to market and limits on funding.

You need a certain mentality to work with start-ups. If you are comfortable in a controlled, process-driven environment, this may not be an ideal job; however, if it appeals to the maverick in you, then get ready for the ride of your life.

Benefits of working with start-ups

Personally, I enjoy the freedom that working with a start-up provides. You get to work directly with all types of employees, from the CEO to marketing, and often the customer. You are in a position to influence areas of the business traditionally outside software testing's jurisdiction and, vice versa, they can provide invaluable input to your testing strategy.

Software testing differs with every company and offers variety of purpose and approach. There is also a wide range of understanding about software testing. Typically, those more knowledgeable on the subject demand more of it. I suspect this is because these start-ups better understand the benefits of testing.

Gavin Killen, co-founder of DigiProve believes that software testing is integral to the success of a product. He explained: “There is no way if someone develops a product it is going to succeed without software testing. You think you’ve got it right, but you haven’t. We missed 25 big issues prior to our independent software testing. It was an invaluable exercise.”

Working with start-ups demands that you’re creative and innovative in order to get the work done. Constraints such as lack of resources, time and money demand creative solutions. Often equipment and tools are limited, and testers have to find new and ingenious ways of testing more with less.

In most cases, you need to be fairly technical, as it will be up to you to set up the test environments. Of course, it goes without saying that you are the 'test team'. You plan the testing, obtain the resources, create the tests, set up the test environment, create reports, and sign off the testing.

Page 7: TEST Magazine - June-July 2009

And don't rely on having a TestDirector to get you through the day. It's simply not enough to suggest that test script management is a useful technique that will benefit the company. For a test tool to exist, you have to champion its cause, find an open source version of it, and install and configure it ready for everyone's use. Naturally, you will also have to maintain it.

Start-ups often lack an extensive, formalised, documented process, as it is often considered an expensive overhead. And in many cases, this is true. Start-ups thrive in a flexible, dynamic environment, and it can be hard to justify following a process that leads to extensive documentation. While some documentation is essential for structure, the focus is on the tester's ability to test an application comprehensively, in a cohesive and disciplined manner that does not involve large amounts of paperwork.

The downside

It can be incredibly frustrating to work with start-ups and it's not a job for the faint-hearted. The pressure to deliver, and to believe that the software will work, often leaves you as the lone voice raising concerns about readiness. Because of this, you need a thick skin, confidence in your work, and the ability to back up your opinions with real facts.

You also have to be prepared for the constant change happening around you. If you're the type of tester who gets frustrated when there are few, no, or changing requirements, then this is not the field of work for you. Like it or not, requirements are constantly being altered, added, refined, taken away, and then put back again, and as a tester in a start-up you will have to be prepared for this constant flux. You will have to find ways to work with the changes as they arrive and plan for constant change.

So, working with start-ups is a bit like a roller coaster ride, with extreme ups and downs, and in the back of your mind you wonder if you're not just a little bit mad to love it so much.

The hard sell

It's also hard work selling software testing to start-ups. A considerable amount of energy and time is spent convincing 'entrepreneurs' of the need to test using independent experts instead of developers. As one start-up CEO explained: “I don't need a software tester, I have developers, just don't call them testers.”

Faced with this type of response, it’s easy to become negative about testing in start-ups and I suspect because it’s not a major revenue area, it doesn’t receive much bandwidth in larger testing agencies. But in turning a blind eye, are we in the software testing industry failing to meet the needs of start-ups?

The majority of start-ups are passionate about quality, but it needs to come in a cost-effective and flexible manner. Peter Elger, CTO of Catch, uses an agile and test-driven development (TDD) approach to software development and testing. As well as automated unit and acceptance testing using Selenium, they also perform exploratory testing and usability testing. For Catch, manual and automated software testing was not a choice; it was the natural way to provide confidence.

Test cover story | 5

Page 8: TEST Magazine - June-July 2009
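By way of illustration only, here is a minimal sketch of the kind of browser-level acceptance check a start-up might automate with Selenium. The URL, field names and 'Welcome' oracle are all hypothetical, and the snippet assumes today's Python WebDriver bindings rather than the Selenium RC API Catch would have used in 2009:

    # Minimal Selenium acceptance check (hypothetical app and selectors).
    # Assumes: pip install selenium, plus a matching browser driver on PATH.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def test_signup_smoke():
        driver = webdriver.Chrome()
        try:
            driver.get("https://staging.example.com/signup")  # hypothetical URL
            driver.find_element(By.NAME, "email").send_keys("tester@example.com")
            driver.find_element(By.NAME, "password").send_keys("s3cret!")
            driver.find_element(By.ID, "submit").click()
            # A cheap oracle: the post-signup page should greet the new user.
            assert "Welcome" in driver.page_source
        finally:
            driver.quit()

    if __name__ == "__main__":
        test_signup_smoke()
        print("signup smoke test passed")

A single cheap check like this, run on every build, is often the first piece of automation a one-person test team can afford.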

Though not all start-ups think like this. As an alternative to hiring a software tester, start-ups typically test in two stages: an alpha test phase, using an in-house developer to test the software, followed by a beta test phase, in which select customers trial the software prior to its formal release.

It's a poor approach to software testing, as most developers make bad testers. To put it baldly, developers like to create, testers like to destroy. Also, beta testing is not as cheap and easy as many start-ups hope. In fact, it's costly, resource-intensive and time-consuming.

Of course, as software testers we all know this. We also know the practical benefits our work brings to a company, and that these benefits extend beyond finding bugs so that others won't. For example, it supplies confidence and knowledge to both supplier and client and reduces the risk of product failure in the field. It improves the application's usability and helps generate positive first impressions, which are essential, as word of mouth is an effective marketing tool for start-ups.

In the world of start-ups, the benefits of testing are amplified because software failure has greater impact. Simply put, if the product fails, the company fails. This is not necessarily true for mature companies who may have more than one product to sell, and if necessary can call upon additional resources to fix problems.

The trouble is, software testers are not seen to be providing value to start-ups. As an industry, we need to spend more effort meeting start-ups' needs and more time promoting these benefits to them.

For example, there is a misconception that while the benefits of software testing are real, it brings with it a weighty, time-consuming process: what is gained in results is lost in time and resources. In fact, vast changes have been made in software testing in the last twenty years or so. Software development and software testing have changed enormously since the waterfall model was concocted in the sixties and seventies. Today, there are many ways that software testing can be executed that extend beyond the V-Model and a typical standardised software testing process. My question is: are start-ups aware of this?

Rapid response

Rapid software testing (RST), as invented by James Bach (http://www.satisfice.com), is a technique that is well suited to working with start-ups. The technique is aimed at testing quickly and comprehensively. It does so by placing greater emphasis on testing the application as opposed to producing test documentation. The technique is context-driven, considers cost versus value in all test activities and uses exploratory testing techniques. It relies on the tester's intelligence to drive the testing through to its completion.

There is also a large number of excellent open source testing tools available to start-ups. These can be appropriate for software test management or for assisting in the automation of testing. This is great news for start-ups, who work in a high-flux, short-time and low-cost environment. It means that there are now techniques out there that allow software testing to be applied in a quick, meaningful and cost-effective manner, providing the benefits of risk reduction and confidence building without requiring enormous amounts of documentation or a large budget.
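As a concrete sketch of how lightweight such open source tooling can be, the example below uses pytest, a free Python test runner; the pricing function and its expected behaviour are invented purely for illustration:

    # test_pricing.py -- run with: pytest test_pricing.py
    # pytest is a widely used open source test runner; the pricing rules
    # below are hypothetical, invented purely for this example.
    import pytest

    def discounted_price(price: float, customer_type: str) -> float:
        """Hypothetical start-up pricing rule: 10% off for 'early' adopters."""
        if price < 0:
            raise ValueError("price must be non-negative")
        return round(price * 0.9, 2) if customer_type == "early" else price

    @pytest.mark.parametrize("price,kind,expected", [
        (100.0, "early", 90.0),
        (100.0, "standard", 100.0),
        (0.0, "early", 0.0),
    ])
    def test_discounted_price(price, kind, expected):
        assert discounted_price(price, kind) == expected

    def test_negative_price_rejected():
        with pytest.raises(ValueError):
            discounted_price(-1.0, "early")

A file like this is its own documentation: the parametrised table doubles as a record of exactly what was tested.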

Perhaps it’s time to take a fresh look at the start-up industry and see how we as software testers can help drive their success. In doing so, we not only involve ourselves in a dynamic and intense environment, we are also ensuring the creation of software testing jobs in the future.


6 | Test cover story

Anne-Marie Charrett

Software tester

Testing Times

www.testingtimes.com.au

Anne-Marie Charrett is a professional software tester; she runs her own company Testing Times. An electronic engineer by trade, software testing chose her when in 1990 she started conformance testing against European standards. She was hooked and has been testing since then. Anne-Marie really enjoys working with innovative and creative people, which has led her to specialise in working for start-ups and incubators. Anne-Marie is a keen blogger and hosts her own blog http://mavericktester.com.

Page 9: TEST Magazine - June-July 2009


Page 10: TEST Magazine - June-July 2009


8 | Testing world

It's testing Jim, but not as we know it!

Just as Star Trek has evolved considerably over the last 40 years, comparisons can be drawn with the evolution of software testing. In an era where off-shoring and automation are becoming essential parts of the testing lifecycle, and rapid technology improvements are constantly invoking change, Gordon MacGregor, chief executive of SmarteSoft, and Steven Christer, global head of testing for Strategic Systems Solutions, ask what implications this has for software testing in the US and around the globe.

Back in 1966, Captain James T Kirk asked to be 'beamed up' for the first time amid a cardboard-looking set, basic special effects and poor makeup. Forty years later, the cardboard sets have been replaced with CGI, and the makeup and special effects are vastly improved. The improvement has been immense, and certain parallels can be drawn with the evolution of software testing over the last few decades.

Why was there a need for a change?

In the 60s, only very limited, unstructured testing took place within software companies and IT departments, predominantly carried out by the development teams. Testing was seen as a routine activity that anyone could complete, and it was usually assigned to junior developers as part of a rite of passage. This approach often produced poor quality, as countless tales of multimillion-pound utility bills and other obvious errors showed. These types of major software failure resulted in software testing undergoing radical reviews.

As the use of computers became more prevalent in both business and the general population, consumers and companies started to demand better software quality. The 'blue screen of death' became unacceptable as the market demanded the same reliability from software that was expected of other consumer goods.

In the late 90s, the millennium panic caused massive amounts of time and money to be expended on formal testing of legacy systems for millennium defects. This further reinforced in the software industry the critical need for rigorous testing. The millennium changed the landscape of testing and can be identified as the point at which it became clear that efficient and effective testing can provide real benefits to ROI.

Rigorous testing became an industry-wide must, with consumer software companies setting the standard for software quality. Mass distribution of consumer software meant that the support cost of software defects skyrocketed. Companies began to invest significantly in testing, and to look at alternative methods of testing. This increase in investment and search for alternative methods began the boom in off-shoring and accelerated the automation trend that had started in the mid-90s.

Page 11: TEST Magazine - June-July 2009

Testing world | 9

People, process and tools

To satisfy the need for more rigorous and professional software testing, IT professionals started outsourcing testing to specialists. Consulting firms formed centres of excellence around software testing. At the same time, to provide enhanced test coverage, better schedule adherence, improved quality and reduced costs, the software industry embraced test automation.

The increase in off-shoring and automation led to the realisation that, to secure maximum ROI, the necessary structure and controls needed to be established, or these changes would not realise their full potential. The introduction of defined processes such as CMMI, TPI and TMMi brought more structure and governance to the testing lifecycle. Formal qualifications from ISEB/ISTQB, at foundation, intermediate and practitioner levels, have also brought more respect to the role that testing plays within the project life cycle.

Yet, although surveys today show that approximately three quarters of software development organisations believe formal testing is an essential investment when deploying new technology, more than half admit that they don't have a consistent approach to testing. There is clearly a greater acceptance of the crucial role testing plays in IT development, but there is still a lack of commitment to the process, especially when it comes to actual investment. This lack of consistency and investment in testing and quality assurance is still costing organisations dearly.

In the coming years we will see further progression of the trend toward investment in testing and test automation; with test management becoming ever more complex, this is already driving the software industry toward sophisticated software test organisations and tools, whether home-grown or outsourced.

The evolution of automation tools

Automation provides an opportunity to reduce cost, improve time to market, stabilise schedule predictability, improve software quality, and enhance employee job satisfaction. With each new generation of automation tools, these benefits are magnified.

The first generation of test automation tools were hand-crafted frameworks and programs written by software developers. This is a common approach even today for unit testing; it is less common for system-level regression testing.
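To make 'hand-crafted framework' concrete, here is a sketch of the sort of minimal harness a developer of that era might have written; the function under test and its cases are invented for illustration:

    # A deliberately primitive, first-generation style test harness:
    # the developer hand-rolls the runner, the checks and the reporting.

    def parse_amount(text):
        """Toy function under test (hypothetical): '£1,000' -> 1000."""
        return int(text.replace("£", "").replace(",", ""))

    TESTS = [
        ("plain number",   "1000",   1000),
        ("pound sign",     "£1000",  1000),
        ("with separator", "£1,000", 1000),
    ]

    def run_all():
        failures = 0
        for name, given, expected in TESTS:
            got = parse_amount(given)
            if got != expected:
                failures += 1
                print(f"FAIL {name}: expected {expected}, got {got}")
            else:
                print(f"PASS {name}")
        print(f"{len(TESTS) - failures}/{len(TESTS)} passed")

    if __name__ == "__main__":
        run_all()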

Hand-crafted test tools were supplanted by a second generation of tools for regression test automation, which provide scripting languages and infrastructure to make software test automation more flexible, faster and easier. Although this was a step forward, scripting is time-consuming and requires much maintenance when the software under test changes; but as long as the target software is reasonably stable, it is a viable approach.

Second generation tools also offer ‘Record & Playback’ features which enable recording of software usage, and automated playback of whatever the user did when recording. For software that does not change often, Record & Playback is a viable approach for GUI-driven regression testing.

Second generation automation tools, widely available from numerous vendors, are excellent for regression testing of large, relatively stable software applications. Care must be taken with second generation tools to maintain documentation and enforce scripting standards; if this is done, the cost and time savings can be compelling. Most tools being used today for test automation are second generation tools.

Page 12: TEST Magazine - June-July 2009

10 | Testing world

Third generation tools have emerged recently from smaller, more innovative firms. They do not require scripting skills and can be maintained very efficiently and easily by relatively junior test engineers. The more advanced third generation tools also generate test case documentation automatically. This combination of ease of use and extreme efficiency leads to a very high ROI for waterfall development, and is perhaps the most viable approach to automation for agile development sprints.

With their extreme speed of implementation and maintenance, automated documentation generation, and ease of use advantages, third generation tools offer a compelling ROI for modern software development teams.

Offshore outsourcing

Understaffed test teams that start work late in the development process are a common problem, frequently leaving teams rushed to complete the required tasks in a compressed period of time. Some companies look to assemble an ad-hoc testing team of subject matter experts in the latter stages of the lifecycle, utilising the developers, in the hope of completing testing tasks in an unrealistic time-frame. This seldom leads to satisfactory results.

Outsourcing firms can help organisations meet tight deadlines by utilising their workforce of test specialists in affordable locations. But many factors need to be considered when outsourcing: specialised skills, duration, confidentiality, a legal system that supports IP protection, and close management, to name but a few. Although India currently has the majority of software test outsourcing, emerging locations like China, Singapore and Eastern Europe are proving to be viable alternatives. China, for example, produced 351,357 computer engineering graduates in 2004, more than the US and India combined.

Outsourcing testing tasks can also bring many other benefits to companies, reducing testing costs, increasing testing expertise and augmenting their test teams. Given the increasing pressure for software quality, many companies will resort to outsourcing testing in order to obtain a crucial competitive advantage. The trend of outsourcing is expected to continue to grow and transform the software testing efforts of many corporations, not just in the US but worldwide.

The American way

The US has been at the forefront of software test improvements over the last few decades; Bach, Gilb and Myers were all leading experts and innovators in the field of software testing. A vast amount of innovation has been driven from the US, with the European markets following the lead set by these American innovators, partly because global corporations have standardised policies and procedures in place. Asia has then followed suit as it looks to standardise its working practices with the European and US markets.

Agile development methodology has been adopted with gusto in the US in the past five years and is driving rapid and wide-reaching change in the global software industry. This topic merits its own discussion, and has been extensively covered in the technical press. Agile development has attributes which challenge the status quo in testing. For one, cycle times are much faster – a sprint of a few weeks does not leave much time for test development or automation. Thus, the need for third generation test tools and specialists who understand agile testing is strong and getting stronger.

Agile development also requires that test plans be co-developed with software design, ideally enabling testing to start before the software has been developed. Again, speed of implementation and maintenance, and availability of expertise, are key factors for effective test automation here. It's probably fair to say that agile as a discipline is not yet mature to the point where test methodologies and tools are well established in advanced form, but it is clear that test methodologies developed for waterfall development are of limited utility in the agile world. This is perhaps the next great challenge for the software test industry, and a challenge which is being aggressively pursued in the US and Europe today.

Steve Christer, Global head of testing, Strategic Systems Solutions, www.sssworldwide.com

Gordon MacGregor, CEO, SmarteSoft, www.smartesoft.com

Page 13: TEST Magazine - June-July 2009


Page 14: TEST Magazine - June-July 2009

12 | Test automation

The seven deadly sins of software test automation

Donning the robes of Dante, Colin Armitage, CEO of Original Software, takes a light-hearted look at testing through the perspectives of the Divine Comedy to uncover some serious home truths about automation.


Page 15: TEST Magazine - June-July 2009


Test automation | 13

For the past 15-plus years, organisations have turned to test automation as a way to improve efficiency in the software development life cycle. Yet despite heavy investment, software testing is still often the bottleneck in the delivery cycle. In a recent survey of CIOs, only six percent were totally happy with their automation. The scary thing is that this is tolerated: it's the norm!

“Only in a world this s****y could you even try to say these were innocent people and keep a straight face. But that's the point. We see a deadly sin on every street corner, in every home, and we tolerate it. We tolerate it because it's common, it's trivial. We tolerate it morning, noon, and night. Well, not anymore. I'm setting the example.” John Doe (Kevin Spacey) in the film Se7en.

The Italian poet Dante Alighieri (1265-1321) wrote an epic poem in three parts, known collectively as the Divine Comedy: Inferno, Purgatorio, and Paradiso. In the Inferno, Dante recounts the visions he has in a dream in which he enters and descends into hell. The sinners that Dante encounters in the Inferno are each punished in accordance with the deadly sin they were most guilty of while they were alive. Now, I can't help but liken this to our very own industry.

As Dante journeys through the Inferno he encounters sinners condemned to eternal damnation because of their actions, or in some cases inaction, while on earth. As I myself journey from prospect to prospect, I often encounter the 'sinners' of test automation, whose projects are condemned to eternal damnation because of their actions or, in many cases, the inaction that we unearth.

One can gain a deeper understanding of Dante’s Inferno by studying the seven deadly sins which brought these souls to this miserable place. In this article I will explore each of the seven deadly sins as they relate to software test automation, instances we come across time and time again, and traps clients have often fallen into because of their earthly vices.

Pride / Vanity

Usually considered the worst of the sins, pride is a feeling of superiority and an excessive belief in one's own abilities; a desire to be more important or attractive to others; failing to give credit due to others; or excessive love of self. This sin often manifests itself in the wanton squandering of money and time on oneself without caring about others.

Picture the scene: the wool has been pulled over the eyes of Brimstone Business Application Co's CIO. He did a deal with a big IT supplier to have their quality-centred products added to his order. At the time it seemed like a good idea – he saved the company money and had a nice round of golf rolled in to boot. He'd fallen into the superiority trap of believing that the most expensive or most prevalent solution would always be the best, but was now beginning to realise that this particular technology was not actually compatible with his company's needs. He'd brought this solution in, so his pride was unable to take failure. Instead he persevered until it was too late and placed unrealistic goals on his QA team, who were then forced to revert to manual testing. As a result, the project timelines slipped and applications went out the door late and bug-ridden, which proved expensive in rework costs and built up a huge stack of technical debt. His department was now damned to an eternity of fire-fighting the latest problems.

Page 16: TEST Magazine - June-July 2009

14 | Test automation

The punishment in Hell will be: to be broken on the wheel. Avoidance strategy: bigger isn’t always better. Look around when evaluating new solutions.

Sloth

Spiritual or actual apathy or laziness; an undue slowness to act. Sloth can also concern waste due to lack of use or allowing entropy, extending to almost any person, thing or skill that would require maintenance, refinement and/or support to continue to exist.

The IT director of Hades Hire Company wanted an easy life. He shelled out for the best test automation solution money could buy (or so the sales exec said). Feeling pleased with himself, he imagined pouring a large Louis XIV, lighting up a cigar, kicking back and watching as all of his testing nightmares melted away in a flash of computer wizardry and dreamy music.

Unfortunately this was far from the case. Two years and £1 million later, after a lengthy installation and script development programme, his team were finally all set to start their first automation! The reality was that there was no computer wizardry, no dreamy music, nothing. It cost the company a small fortune, and needed continuous attention and constant looking after by a team of specialists to be effective. Even with all this TLC, it was still prone to sulk and become unproductive. Every time the application was updated, the specialists had to revisit the code and tweak and rewrite parts of it so it could continue showing off its silky skills.

The upshot of this was that, for those areas of the application that were changing frequently, the test automation became very difficult to maintain. Testers with the necessary scripting skills were also very expensive to employ, and with staffing cutbacks and no end of changes to the application, his QA department kept resorting to manual testing in order to cope with the demand. In short, his £1m investment slowly fell into disuse and was doomed to take its place amongst the graveyard of dusty shelf-ware.

The punishment in Hell will be: to be thrown into snake pits. Avoidance strategy: beware of smoke and mirrors when evaluating solutions. Make sure they are easy to maintain.

Page 17: TEST Magazine - June-July 2009

Test automation | 15

Lust

Lust is an inordinate craving or animal desire; a lascivious passion or desire for something, often to the point of assuming a self-indulgent, and sometimes violent, character. It relates to almost anything that you may have a strong desire or drive for, such as the lust of battle or the lust for power. Lust prevents clarity of thought and rational behaviour.

The QA team leader at Lucifer Banking Corp is fed up with making constant requests to her database administrator for slices of data to run tests on. It's a huge bottleneck in the testing phase. She's lusting after control. When the test database becomes corrupted, it can take up to ten hours to roll back the changes and reset the environment.

If only she were able to create her own test database, with checkpoints she could revert to time and time again, her life would be a hell of a lot easier. There's a tight deadline on the latest software release and she knows that getting the DBA to reset her environment will hold the project up another 24 hours, so she decides to go ahead and test on the (corrupted) database she has to hand. The regression test picks up a ton of defects that the test team go on to spend days investigating, only discovering, far too late in the day, that the errors were actually in the database and not the latest software build.

Because they have wasted time chasing ghosts, real defects have managed to slip through the net in areas of the application they were unable to test. Because some exchange rate changes were not picked up, the application has had to be pulled and the team are destined to spend their weekend bug-fixing.

The punishment in Hell will be: to be smothered in fire and brimstone. Avoidance strategy: test data management is the bedrock of successful automation.
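Her checkpoint wish is easy to sketch. Using nothing but Python's standard library and a disposable SQLite database (the file names and table are invented for the example), a tester can snapshot a known-good state and restore it in seconds rather than waiting hours for a DBA; a real shop would use its own database's backup and restore facilities the same way:

    # Sketch: checkpoint/restore for a disposable test database.
    # Standard library only; all names here are hypothetical.
    import shutil
    import sqlite3

    DB = "testdata.db"
    CHECKPOINT = "testdata.checkpoint.db"

    def seed():
        """Create a small known-good data set (values invented)."""
        con = sqlite3.connect(DB)
        con.execute("CREATE TABLE IF NOT EXISTS rates (ccy TEXT, rate REAL)")
        con.execute("DELETE FROM rates")
        con.executemany("INSERT INTO rates VALUES (?, ?)",
                        [("GBP", 1.0), ("EUR", 1.17)])
        con.commit()
        con.close()

    def checkpoint():
        """Snapshot the known-good database file."""
        shutil.copyfile(DB, CHECKPOINT)

    def restore():
        """Throw away a corrupted database and revert to the checkpoint."""
        shutil.copyfile(CHECKPOINT, DB)

    if __name__ == "__main__":
        seed()
        checkpoint()                               # known-good state saved
        con = sqlite3.connect(DB)                  # a destructive test run...
        con.execute("UPDATE rates SET rate = -999")
        con.commit()
        con.close()                                # ...corrupts the data
        restore()                                  # seconds, not a ten-hour rollback
        con = sqlite3.connect(DB)
        print(con.execute("SELECT * FROM rates").fetchall())
        con.close()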

Greed

The self-serving desire to acquire material wealth and possessions beyond the need of the individual, especially when this accumulation of possessions denies others. Greed is often manifested in various forms such as miserliness and unethical business practices; a preoccupation with the acquisition and preservation of material things.

Page 18: TEST Magazine - June-July 2009

16 | Test automation

The IT manager at RiverStyx Retail had heard about a great job opening. He coveted the increased seniority, power and responsibility it would give him, and it was paying a lot more money too. There was just one hitch: he needed to prove success with some short, quick wins.

Unconcerned about corporate risk or the technical debt he would be leaving behind, he took numerous shortcuts in order to release more applications in a shorter time-frame, hoping to be an immediate hero and not be around by the time the repercussions were realised. He took that gamble but missed out on the new role. Of course, just like in every moral tale, his shortcuts came back to haunt him. He’d bypassed the database testing on their ecommerce application and the value field for the televisions that would normally retail for £1000 had been input at just £1.

Unfortunately, because there was no defect reporting system in place, consumer word of mouth acted much faster than the bug-fix and, by the time the error was resolved, RiverStyx had sold their entire stock. Five thousand televisions at a loss of £999 each made a significant dent in the company's fortunes, and in the ensuing witch hunt his bad practices and lack of planning and strategy were exposed, warts and all, under stadium-strength floodlights. Needless to say, he lost his job and, without a reference, was condemned to join the ranks of the dole queue rather than the lofty heights of the elite that he had envisaged!

The punishment in Hell will be: to be boiled alive in oil. Avoidance strategy: Make solid test plans and stick to them. Shortcuts will always come back to bite you.

Wrath

Intense, fierce anger, rage or fury, usually on an epic scale and often leading to violence; wrath can be any action carried out in great anger, especially for punishment or vengeance.

You've fought for the budget, parted with the money, and spent two years implementing your whizzy new test automation solution, and now you find that, rather than freeing up resource and providing a rapid ROI, you actually need more staff to maintain all the script changes. It's enough to make your blood boil! Well, that's the situation the IT director of 666 Solutions found himself in. His ensuing hissy fit and bad temper made his capable team hand in their notices, doubling his pain and guaranteeing him a stomach ulcer to boot.

The punishment in Hell will be: to be dismembered alive. Avoidance strategy: Know what you are buying into and do a thorough proof-of-concept.


Page 19: TEST Magazine - June-July 2009



Test automation | 17

Colin Armitage, CEO, Original Software, www.origsoft.com

Gluttony

Derived from the Latin gluttire, meaning to gulp down or swallow, gluttony is the over-indulgence and over-consumption of anything to the point of waste; it often manifests itself as obesity, substance abuse or binge drinking.

Armageddon Autos had a small QA team of just two testers. To be honest, they could have done with another couple of people in the team, because the workload had increased significantly in the preceding few months. The senior tester was a bit of a script-monster; he loved nothing more than building scripts for their automated testing solution. In fact, he loved scripting so much, he did very little else. Eight hours a day, five days a week… “gotta build those scripts, gotta build those scripts”.

Love those scripts! When an application changed and needed re-testing he was there changing the scripts he’d just spent weeks building. “Gotta change those scripts, gotta change those scripts”. He loved scripting so much he actually never got any proper testing done. Time and time again, testing held up application delivery, projects were always late, and always went over budget. The problem was, he didn’t seem to mind, he was happy with his scripting solution.

“It will automate everything – just as soon as I finish these scripts,” he kept saying, but the application was always changing faster than he could keep up. Maybe the QA team didn't need that extra person after all; maybe they just needed their senior tester to get on and test rather than spend every waking minute writing those scripts. Maybe they didn't have the ideal set-up, but he loved that scripting solution because of his familiarity with it and because it gave him a purpose in life – to build scripts. The QA team was fated to continue going round in circles, being the slow cog holding up the rest of the machinery.

The punishment in Hell will be: to be force-fed rats, toads, and snakes. Avoidance strategy: Remember what you are trying to achieve. Don’t get hung up on the mechanics.

Envy

This is the desire to possess what others have; grieving spite and resentment of the material objects or accomplishments of others. Those who commit the sin of envy resent that another person has something they perceive themselves as lacking, and wish the other person to be deprived of it. Jealousy; covetousness.

It's Friday evening and, after a devilishly long day trying to co-ordinate user-acceptance testing, Azazel Applications' business analyst is relaxing in the pub, nursing a pint of Hellhound Hops ale and venting his frustration to his close friend, the QA manager of Dungeon Doughnuts.

His problem was that when IT chose to purchase their ALM solution, they didn't engage him or consider the business needs for the solution. It was purchased purely for the needs of development and testing; nobody considered the bigger picture. Nobody thought about aligning IT with business requirements, or how they would be able to conduct user acceptance testing when the selected solution required technical know-how and the ability to read code. As a result, his role has become little more than that of a glorified call centre operative, going back and forward between IT and the users, trying to work out how they came across particular defects and keep the project on track.

His friend tells him about the new solution they've just deployed at Dungeon Doughnuts, with a point-and-click interface, recording activity behind the scenes, and the ability to mark up defects for users to email straight to development. It is obviously brilliant and exactly what he would have liked to see implemented at Azazel Applications, but his friend is so smug about it that, while he's at the bar getting the next round in, he childishly flicks fag ash in his beer. His jealousy will eat him alive.

The punishment in Hell will be: to be submerged in freezing water. Avoidance strategy: consider your IT/business alignment and make sure all requirements are in place from the outset.

So, revisiting the opening quote from Kevin Spacey, let's shake off this apathy and stop tolerating these deadly sins just because they are commonplace. If you want to wear the halo of professionalism and not get burnt by bad practice, take a long hard look inside yourself at what you are doing and why you are doing it. Don't get caught out by letting arrogance, inertia, desire, materialism, self-indulgence, resentment or temper get the better of your business decision-making.

Page 20: TEST Magazine - June-July 2009



Pen testing in the Web 2.0 era

How is penetration testing coping with the brave new virtualised world of Web 2.0, with its new opportunities to breach security and compromise data? ProCheckUp director Richard Brain outlines the state of the art.

The increased market adoption of virtualised servers and interconnected web services (Web 2.0) introduces new challenges when performing pen tests to uncover flaws and to create proof-of-concept attacks. Testing with no prior knowledge (black box testing) has historically provided a good foundation for a sound penetration test, though to detect and defeat the more prevalent advanced attacks of today, a more comprehensive review of system information and source code is now required (white box testing).

Virtualisation

Server virtualisation is rapidly becoming the standard in the server environment. Driven by the release of Windows Server 2008 and the Red Hat Enterprise Linux 5.x OS, and the desire to fully utilise the power of the latest Xeon chipsets, host machines running these operating systems are easily able to support four to eight virtual hosts.

Worms and viruses have historically spread over network shares, exploiting new security flaws found within machines. Virtual machine sprawl – the uncontrolled creation and expansion of the number of virtual machines – can allow worms and viruses to spread throughout the data centre, as un-patched and insecurely configured hosted machines are vulnerable to the same flaws as stand-alone operating systems and can become reservoirs of malicious agents if not properly managed.

Additionally, as virtual machines have predictable hardware profiles, with similar virtual hardware shared between virtual machines, this similarity might be exploited by future malware to spread more rapidly between machines as virtualisation becomes more widespread. BIOS-level root kits are old news, and it should be expected that, in due course, root kits targeting virtual hardware, such as the keyboard controller, will be released.

18 | Test security

Page 21: TEST Magazine - June-July 2009

Test security | 19

When the Conficker and Sasser worms spread using hard drives, DVD and USB devices via the auto-run feature, hosted virtual machines became infected from the host machine: physical host drives, when shared between hosted machines, auto-ran and installed the worm on the hosted machine. Microsoft released a patch which effectively disabled auto-run in Windows Server 2008 in February 2009.

Penetrating the virtual world

Penetration testing of virtual machines is no different from testing conventional hosts. Open ports are discovered and the services running over those ports are tested for security flaws. Additional virtual support software, like WMI management, might also be found running on virtual machines. Interacting manually with individual virtual machines proves that patching is recent and that an up-to-date antivirus system is running. An additional problem is identifying offline virtual machines and backup images (stored offline or online), which may not be sufficiently patched before being exposed to a dangerous environment like the Internet. The backup images themselves might be infected with malware, which needs to be considered if an organisation has recovered from a malware infection in the past.

Servers that host and manage multiple virtual machines (four or more) require more in-depth and focused penetration testing to ensure that no security flaws exist which might adversely affect the dependent hosted machines. Simple denial-of-service attacks might occur by killing the hosting machine, or by privilege escalation.
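The port-discovery step described above can be sketched in a few lines. This is a toy TCP connect scan against a hypothetical lab address, for use only against hosts you are authorised to test; a real engagement would use a dedicated scanner such as Nmap:

    # Toy TCP connect scan (illustrative only; scan only hosts you are
    # authorised to test). A real engagement would use a tool like Nmap.
    import socket

    TARGET = "192.0.2.10"          # hypothetical lab address (TEST-NET-1)
    COMMON_PORTS = [21, 22, 25, 80, 135, 443, 445, 3389]

    def scan(host, ports, timeout=0.5):
        open_ports = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:   # 0 means connected
                    open_ports.append(port)
        return open_ports

    if __name__ == "__main__":
        print("open:", scan(TARGET, COMMON_PORTS))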

Web 2.0

In the past few years the Web has evolved from servers generating content to a more responsive and dynamic mixture of client/server communications. At the same time, JavaScript injection attacks have evolved from perceived session-stealing attacks using cross-site scripting (XSS) to full exploitation frameworks which allow far more serious attacks.

Anton Rager demonstrated the feasibility of JavaScript exploitation frameworks with XSS-Proxy. Further frameworks have since been released, like BeEF, XSS Shell and Backframe. These frameworks allow far more serious attacks, like intercepting key presses made within the victim's browser, capturing browser requests and injecting requests into the victim's browser. Penetration testers now have to check for the more dangerous XSS attacks in their various forms (reflective and persistent), with common character encodings or browser-specific variations to bypass the different input/output XSS filters. Matters become more complex if programming languages or file types are used which are themselves subject to common XSS attacks, owing to common misconfiguration or old, insecure versions.

Services like Twitter, eBay, YouTube and LinkedIn, which allow users to upload and modify their own content, pose a number of problems when performing penetration tests: testers must check whether malicious JavaScript can be directly uploaded to the website, and confirm that the website's input and output filters can cope with the various behavioural nuances of the different web browsers and rendering engines.

Page 22: TEST Magazine - June-July 2009

20 | Test security

For instance, to bypass input/output filtering of the static word 'javascript' (used to run code), it is common to add a new line (represented by \n) within the word, making it java\nscript, as Internet Explorer still processes words separated by line breaks. Any uploaded files should be treated with suspicion, owing to various published exploits arising from rendering errors when processing maliciously submitted files. This might directly affect the website under test if it previews the file content or, more commonly, only affect the end users' machines. For instance, with GIF files, the comment area might be used to alter the Flash cross-domain policy, or a GIF file might be combined with a JAR (Java Archive) – termed a GIFAR – to produce content which is then executed by the Java virtual machine. Other common GIF attacks attempt to create buffer overflows to run code on user machines, like the Mozilla Foundation GIF overflow or the Sun Java JRE GIF processing overflow vulnerabilities.
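The newline trick is easy to demonstrate. Below is a deliberately naive blacklist filter, invented for illustration, of the kind such attacks defeat: it strips the literal word 'javascript' but misses the java\nscript variant that Internet Explorer of that era would still execute:

    # Demonstration of why naive blacklist XSS filters fail.
    # The 'filter' below is deliberately naive, invented for illustration.
    import re

    def naive_filter(html: str) -> str:
        """Strip the literal word 'javascript' (case-insensitive), nothing else."""
        return re.sub(r"javascript", "", html, flags=re.IGNORECASE)

    plain_payload  = '<a href="javascript:alert(1)">click</a>'
    sneaky_payload = '<a href="java\nscript:alert(1)">click</a>'  # newline inside the word

    print(naive_filter(plain_payload))    # scheme removed: attack neutralised
    print(naive_filter(sneaky_payload))   # 'java\nscript:' survives the filter,
                                          # yet IE-era browsers collapsed the break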

Submitted links can be used to attack flaws found in other sites, exploit flaws in software running on end-user machines (eg Flash Player XSS) or directly attack other users of the website by a technique called cross-site request forgery (CSRF). CSRF attacks typically occur where the website uses long-lived persistent cookies to authenticate its users. For instance, a website user visits a maliciously submitted page which then submits a page request such as 'delete user' (normally via an image tag). The user's browser, recognising that the page has associated persistent cookies, submits the authentication cookie along with the request, and the site carries out the deletion believing it was submitted by the user. The Samy worm spread across MySpace using an XSS attack, bypassing the mechanisms intended to prevent CSRF, and using string concatenation and character conversion to bypass XSS filters.
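To ground the description, here is a hedged sketch of the classic synchronizer-token defence against the image-tag attack just described; the session store and handler names are invented for illustration:

    # Sketch of the synchronizer-token CSRF defence (names are hypothetical).
    # The attack works because the browser attaches cookies automatically;
    # a per-session random token, which a cross-site <img> tag cannot know,
    # breaks it.
    import hmac
    import secrets

    SESSION = {}  # session_id -> csrf_token (stand-in for real session storage)

    def start_session(session_id: str) -> str:
        """Issue a fresh unguessable token when the session is created."""
        token = secrets.token_urlsafe(32)
        SESSION[session_id] = token
        return token  # embedded in forms as a hidden field, not in a cookie

    def handle_delete_user(session_id: str, submitted_token: str) -> str:
        expected = SESSION.get(session_id, "")
        # compare_digest avoids timing side channels on the comparison
        if not hmac.compare_digest(expected, submitted_token):
            return "403 Forbidden: missing or wrong CSRF token"
        return "200 OK: user deleted"

    token = start_session("abc123")
    print(handle_delete_user("abc123", token))  # legitimate form post
    print(handle_delete_user("abc123", ""))     # forged <img src=...> request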

Many servers accept RSS (really simple syndication) news feeds, which are forwarded to the servers' subscribers; the subscribers' web browsers then render the information contained within the RSS file. RSS files use the XML file standard to transmit information, and a problem occurs when an attacker is able to submit a malicious RSS file. In such an instance, the attacker might be able to perform an XXE (XML external entity) attack to read system files and perform other attacks on an RSS aggregator machine (news site), or exploit an XML parser security weakness within subscribers' web browsers, eventually to run system commands on subscriber machines. A recent example was the CVE-2009-0137 Safari RSS attack, where a maliciously crafted news feed was potentially able to execute code on the client.
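As an illustration of what an aggregator-side check against XXE might involve, the sketch below rejects feeds that declare a DTD before parsing; the feed sample is a classic XXE probe, and a production system would more likely rely on a hardened parser such as the defusedxml library:

    # Sketch: refusing XML external entities in submitted feeds.
    # The feed below is a classic XXE probe, shown for illustration only.
    import xml.etree.ElementTree as ET

    MALICIOUS_FEED = """<?xml version="1.0"?>
    <!DOCTYPE rss [<!ENTITY xxe SYSTEM "file:///etc/passwd">]>
    <rss><channel><title>&xxe;</title></channel></rss>"""

    def parse_feed_safely(xml_text: str) -> ET.Element:
        # Cheap pre-check: a well-formed feed has no business declaring a DTD.
        if "<!DOCTYPE" in xml_text or "<!ENTITY" in xml_text:
            raise ValueError("rejected: feed declares a DTD/entities (possible XXE)")
        return ET.fromstring(xml_text)

    try:
        parse_feed_safely(MALICIOUS_FEED)
    except ValueError as err:
        print(err)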

Penetration testers have to ensure that aggregator sites have processes and controls in place to ensure that un-trusted RSS feeds cannot be added, and that any code providing RSS feeds has sufficient filtering and malicious code detection so that un-patched subscriber machines do not execute any malicious embedded code. This is not straightforward, as some RSS feeds embed tags to make their content more interesting.

Paying up
Service providers like PayPal, eBay and Amazon remove from their users the need to process card payments and run complex e-commerce environments. The interlinking of these different services ensures that vulnerabilities in providers will affect their users; when performing a penetration test it is becoming common that the flaw found is 'downstream' from the site under test. There are also issues with data integrity, as data is now distributed and shared to and from the different service providers (data might be lost, intercepted, etc). The website under test might submit customer data to a service provider using a published API provided by that service provider. This historic or current API code might contain programming flaws which allow other registered parties to retrieve customer details or to interfere with the processing of orders.

AJAX and asynchronous JavaScript have been adopted to speed up data transfer to the client by sending and displaying only the information which has changed (instead of resending a whole page). The increasingly widespread adoption of AJAX increases the time needed for penetration tests, as the information sent has an innumerable number of variations to inspect and try to modify. The process of understanding the data is time-consuming, as JavaScript libraries have to be inspected, the security implications of the implementation understood, and attacks created and simulated to carry out an in-depth penetration test. Various frameworks exist, like Google Web Toolkit, SAJAX, XAJAX or Microsoft .NET AJAX, and various formats are used to send data, from custom streams to XML and JSON.
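A minimal Python sketch (endpoint and fields are hypothetical) of the kind of replay-and-tamper loop a tester works through for each observed AJAX request:

import json
import urllib.error
import urllib.request

# A request captured while browsing the application normally.
observed = {"action": "update", "item_id": 1001, "quantity": 2}

def send(payload):
    req = urllib.request.Request(
        "http://app.example/ajax", data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

# Replay the request with one field at a time replaced by a hostile value.
for field, bad_value in [("item_id", -1), ("item_id", "' --"),
                         ("quantity", 2**31), ("action", "<script>")]:
    tampered = dict(observed, **{field: bad_value})
    print(field, repr(bad_value), "->", send(tampered))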

Facing the challenge
I hope this article has given an insight into some of the current challenges facing penetration testers. More time needs to be allocated to perform Web 2.0 penetration testing, particularly with penetration testing companies operating in an increasingly competitive environment, where the market demands that web application tests be performed to a budget, with year-on-year cost reductions, despite the inherent need to spend more time. This regrettably all too brief overview of how penetration testers can find vulnerabilities has hopefully aided administrators and information security managers in making their infrastructure more secure. Another concern for administrators and ISMs is the effectiveness of traditional IDS/IPS and application-level firewalls in detecting Web 2.0 attacks, as such devices have been used as the traditional sticking plaster for insecure applications in the past.

Richard Brain
Director
ProCheckUp
www.procheckup.com

Page 24: TEST Magazine - June-July 2009

T.E.S.T | June 09T.E.S.T | June 09

22 | Test security

For security, call the fuzz

Fuzzing has proven to be a low cost and very effective technique for software testing. Rodrigo Marcos, principal consultant at Secforce, explains why there's a buzz about fuzz.



The fuzzing technique applied to software security is not a new concept; it has been around for many years. The earliest reference dates back to 1988, to Professor Barton Miller's Advanced Operating Systems class [1]. The approach taken by Miller was not focussed on security but on generic software robustness testing.

Only in the last five years, however, has fuzzing become a mainstream technique, not only with the appearance of a wide number of commercial and open source fuzzing solutions, but also with IT security consultancies offering services to develop bespoke fuzzers for specific scenarios. The main reason for this growth is that fuzzing has proved to be a low cost and very effective technique for software testing.

What is fuzzing?
Fuzzing is a technique used for software testing assurance. In its simplest form, a piece of software receives input data, processes it and generates an output. Software data input can occur in several ways, including:
- File
- User entry
- Network protocol
- API

The input data generally follows an agreed format. For example, when you open Microsoft Word, the application expects a correctly formatted .doc document. Similarly, when you visit a website and load a web page, your web browser expects a correctly formatted HTML page. And when you deploy an FTP server, the application expects an FTP client to send requests to it in accordance with the FTP protocol.

Fuzzing is a technique whereby an application is fed malformed input data with the aim of detecting anomalies in the processing of unexpected data entry. Although this process can be applied to a number of software assurance fields, it is most widely used for security vulnerability discovery. The process is normally approached from a black box perspective and is automated in order to cover large numbers of input primitives.

In an ideal world with no time and resource constraints, it would be possible to achieve full code coverage by generating every single potential input that the application might receive. However, this is not only unfeasible but also unnecessary in most scenarios. For example, if a piece of software has been designed to receive a buffer of 256 characters, it is very unlikely that sending strings with a length between one and 255 would trigger an anomaly. However, null strings and buffers larger than 256 characters might lead to some kind of memory corruption vulnerability. The common approach for input generation is therefore to focus on the boundaries of the specific data type.
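A minimal Python sketch of boundary-focused input generation for the 256-character example above (the length choices are illustrative):

def boundary_inputs(limit=256):
    # Lengths straddling the declared buffer size, plus extremes.
    for n in (0, 1, limit - 1, limit, limit + 1, 2 * limit, 64 * 1024):
        yield "A" * n
    yield "\x00" * limit        # embedded NULs
    yield "%s%s%s%n"            # format-string style probe

for value in boundary_inputs():
    print(len(value))           # in a real harness: feed value to the target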

Mutation-based vs generation-based fuzzers
All fuzzers need to generate input and pass it on to the software that needs to be tested. There are two ways of generating the input data:

Mutation-based input: This approach takes a valid input as a baseline and generates variations of the baseline. For example, a file fuzzer may take a valid .pdf file and mutate it, automatically generating a large number of modified versions of the file, which will in turn be opened by the .pdf reader. A network protocol fuzzer may start from a known valid TCP request, modify this request in various ways and send it to the server.

Depending on the design of the fuzzer the mutation can simply be random or can implement some level of intelligence.

Generally this approach is simple and fast. Moreover, mutation-based fuzzers can usually be reused in many different scenarios. For example, it should be very simple to reuse a mutation-based .pdf fuzzer for any other file format. However, these types of fuzzer only test variations of a known example, and therefore the code coverage tends to be very limited, as they only exercise a small range of all the potential input.
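A minimal Python sketch of such a fuzzer (file names are hypothetical; a real harness would also monitor the reader for crashes):

import random

def mutate(sample, flips=8):
    # Flip a handful of randomly chosen bytes in the valid baseline.
    data = bytearray(sample)
    for _ in range(flips):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

with open("valid_sample.pdf", "rb") as f:
    baseline = f.read()

for case in range(100):
    with open(f"mutated_{case:03}.pdf", "wb") as f:
        f.write(mutate(baseline))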

Generation-based input: In this approach the entry data is generated from scratch, based on the specification of the file type, network protocol, API, etc that is going to be tested. For example, an HTTP protocol fuzzer may be designed based on the Hypertext Transfer Protocol specification [2]. It would cover all the possible requests, methods and parameters.

This approach requires a level of understanding of the software that is going to be tested. The implementation of this type of fuzzer is more complex, as it involves a full understanding of all the potential input and the integration of this understanding into the fuzzing engine.

Generation-based fuzzers can't be easily reused as they are focussed on a specific target; however, they are usually more thorough in their testing, and the code coverage is better, therefore leading to better results.

In practice
Consider the following typical example. We are testing a piece of software which receives a request from a client. The network protocol states that the message sent to the server needs to be in the following format:
- Byte 1: Type of message
- Byte 2: Length of the message
- Buffer of the length specified in Byte 2

Depending on the message type, the server will execute a different action. A valid example request sent to the server would have the format:

1 | 14 | This is a test

The example above is a request with type 1 and the text "This is a test" (14 characters).

Now consider a server implementation containing a vulnerability caused by an insecure memory allocation that is triggered when the type of the message equals three: the server reads the request type and branches into per-type handlers, where the Type = 1 and Type = 2 branches run secure code but the Type = 3 branch performs the insecure memory allocation via an insecure strcpy() call.
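To make the contrast concrete, here is a minimal Python sketch (not from the article; host, port and case selection are illustrative) of a generation-based fuzzer for the three-field protocol described above:

import socket

def build_message(msg_type, payload):
    # Byte 1: type, Byte 2: length, then the buffer itself.
    return bytes([msg_type, len(payload) & 0xFF]) + payload

def test_cases():
    for msg_type in range(256):              # enumerate every type value,
        for payload in (b"", b"A",           # so the Type = 3 branch is
                        b"A" * 14,           # always exercised
                        b"A" * 255):         # boundary length for one byte
            yield build_message(msg_type, payload)

for case in test_cases():
    # A fuller fuzzer would also lie in the length field and watch the
    # server process for crashes.
    with socket.create_connection(("target.example", 9000), timeout=2) as s:
        s.sendall(case)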



With truly random mutation-based input generation, the fuzzer would only target the vulnerable branch of code once in every 256 mutations of the Type byte. A generation-based approach would implement test cases for each type of message and would certainly reach the vulnerable branch of code. It is difficult to say beforehand which approach is going to be better; however, generation-based fuzzers provide more completeness and integrate better into methodical testing practices.

Fuzzing as part of your SDLC
Although fuzzing was once a technique exclusively used by security researchers, it is now becoming a vital part of companies' software development lifecycles. Microsoft has adopted fuzzing in its Trustworthy Computing Security Development Lifecycle [3] (SDLC), stating: "Apply security-testing tools including fuzzing tools. Fuzzing supplies structured but invalid inputs to software application programming interfaces (APIs) and network interfaces so as to maximize the likelihood of detecting errors that may lead to software vulnerabilities." The discovery of the ANI vulnerability [4] in March 2007 had a positive impact on Microsoft's adoption of fuzzing, an effort that continues today in areas such as file format parsing across the Microsoft Office suite.

The growth of resources on fuzzing is increasing the adoption of this technique by QA teams. Nowadays there is a greater understanding of this technique which has proved very efficient in discovering security issues that could have been very difficult to identify with white box approaches such as source code reviews.

Here to stay
Fuzzing is here to stay. It is a technique that has helped researchers, developers and testers to assess the security and robustness of software. It can be applied to any piece of software which accepts some form of data, regardless of its source: a file, a media stream, a network request, an API function, etc.

Fuzzing is a low cost solution compared to other software testing techniques such as source code reviews, and it can be integrated into the software development lifecycle. This process is simpler than before, as there has been significant growth in the number of resources on fuzzing and in the number of IT consultancies with an understanding of the technique.

Rodrigo Marcos
Principal consultant
Secforce
www.secforce.co.uk

Notes
[1] http://pages.cs.wisc.edu/~bart/fuzz/CS736-Projects-f1988.pdf

[2] http://www.w3.org/Protocols/rfc2616/rfc2616.html

[3] http://msdn.microsoft.com/en-us/library/ms995349.aspx

[4] http://www.microsoft.com/technet/security/bulletin/ms07-017.mspx



As companies have come to realise the importance of testing, they have also simultaneously discovered the benefits of offshoring it. Graham Smith, head of test consultancy, Europe at Cognizant Technology Solutions, explores the benefits of outsourcing.

An introduction to outsourcing

We are in dynamic times where the structure of the economy at large, and businesses in particular, is undergoing change. The role of IT as a business enabler is now even more prominent – and as a result, business, IT and QA are no longer seen as discrete elements but as complementary in building a competitive advantage for business.

Software testing has evolved over the years from a 'necessary evil' to a 'value enhancer'. Organisations now look to testing to provide them with the necessary confidence to power ahead in their business initiatives of integrating disparate platforms, reducing time to market with accelerated development methodologies or improving the customer experience by enhancing their customer-facing applications.

The rise of testing
Software testing's emergence as a separate vertical, rather than a part of in-house activity, is the result of five key factors. First, with a large number of applications moving online, a robust testing infrastructure has become imperative for handling huge volumes of transactions in real time. Secondly, the increasing number of mission-critical applications on the horizon need to be high-performance and highly available. As such, testing them for performance and robustness is critical, as many of them are becoming 'self-service' applications.



Thirdly, with several testing tools available in the marketplace, it is now possible to provide fully-integrated testing services in several different environments. Fourthly, many product companies are adopting independent testing, verification and validation to reduce the cycle time for deployment of their products. Launching a well-tested product early gives product companies a unique advantage in gaining mindshare and market share early on.

Finally, the increasing maturity of the offshore software services sector has contributed to the rapid growth of the testing services market. The scarce availability of local skilled testing professionals and domain experts to support testing has also supported this offshore growth. So, as companies have come to realise the importance of testing, they have also simultaneously discovered the benefits of offshoring it.

Why outsource testing?
Software testing involves the validation of applications to ensure that the user experience is in keeping with the specifications. Black box testing, which accounts for most outsourced testing, is performed from an end user's or a business user's perspective, rather than from a code perspective.

Traditionally, testing services were offered in an integrated fashion, bundled with application development and application maintenance services. Today, outsourcing companies offer testing services as a standalone proposition, focusing on supporting enterprises' and product companies' testing needs through independent verification and validation (IVV) of their software.

In such an outsourced, IVV testing model, the testing team is physically separated and given full independence to report the test results to the customers. This is underpinned by an effective engagement model where the testing team works closely with the client’s Quality Assurance organisation. More importantly, the testing team directly reports into the COO or the CEO and not into the project managers or business unit heads, thereby giving them the independence and objectivity to deliver value.

Companies choose to outsource testing for several reasons. The availability of a large, global talent pool helps to increase the capacity of the IT teams at an optimal cost. It also helps to manage resource requirements based on need, especially important when sudden changes in demand are anticipated.

Secondly, testing service providers invest significantly in various avenues of testing such as infrastructure, processes, people, tools, research, training and development. This helps their clients obtain higher value for the cost incurred, thereby maximising the return on outsourcing in terms of reduced cost, improved quality and increased speed of testing. Outsourcing companies can also deliver continuous process improvements, resulting in even greater productivity and cost efficiencies. The independent validation a third party provides can also boost confidence when it comes to deciding when a new application should go live. Finally, the detailed metrics that outsourcing partners can capture and report also help a business to more effectively evaluate the total cost of testing work.



In essence, outsourced testing models provide the benefits expected from any IT outsourcing engagement, such as access to a large pool of dedicated professionals with a wide range of skills, and increased efficiencies derived from the teams’ deep expertise and experience.

Who should outsource?
There are many reasons for a company to consider outsourcing its software testing, the most basic being around cost and quality issues. Companies that are experiencing quality issues as a result of insufficient or ineffective testing processes, or those whose costs are swelling rapidly due to spending too much effort on testing (often due to using contractors, business analysts or SMEs to perform testing), can benefit from taking the outsourcing option.

Organisations that want to experiment with outsourcing are also candidates, with testing work typically seen as an accessible first step because of the high cost-savings and relatively low risk involved. There are also companies whose customers have large integrated application environments requiring regular maintenance in the form of bug fixes and enhancements, or enterprise implementations which may require support after the initial implementation and stabilisation. In these cases, a contractor can provide ongoing support more quickly and cost-effectively.

Then there are organisations planning a change in technology or a change in business processes. In this event, outsourcing everyday business, such as testing, frees up critical resources and subject matter experts that are needed for re-aligning with new processes and systems.

Finally, there are organisations looking to optimise cost and improve coverage through automation. For these types of company, building a test automation team internally may be a costlier option than choosing to work with an outsourcing partner.

The future of testing
IT organisations today have started to evaluate ways and means of optimising testing costs. Large organisations are creating Testing Centres of Excellence to consolidate testing teams, thereby optimising the cost of testing. Meanwhile, smaller organisations are inventing newer ways to reduce cost, increase speed and improve quality. A few of these organisations are adopting agile methodologies as opposed to a traditional, sequential waterfall model, and QA teams have started researching techniques such as model-based testing, test-driven development and virtualisation.

With more businesses opting to increase their software's user reach by moving towards e-commerce and mobile commerce, we are witnessing a trend of offerings such as mobile testing and infrastructure testing. These include subsets such as mobile application testing, compatibility testing, security and usability testing. We are also witnessing a trend towards increased automation and reduced manual intervention in testing. In due course, testing will progress to encompass and provide testing-allied services, such as environment management. In addition, testing teams will graduate from being gatekeepers for quality to being accountable for quality, transforming from quality control functions into quality management roles.

Recent developments in technology, such as SOA and virtualisation, are also expected to have a significant impact on testing - especially on the methodology front - and will mandate a fundamental change in the mindset of a testing professional. For example, SOA offers a way to more flexibly meet requirements by aligning technology with business needs. However, this model also makes software more complex and interconnected, and therefore more difficult to test. To increase business agility, while reducing the risks of change and complexity in software, the testing team needs to work alongside development and business teams to ensure quality throughout the design, development, and change lifecycles of SOA software. With no proven tools or frameworks currently available, QA teams continue experimenting with ways to address the SOA testing challenge.

Virtualisation techniques meanwhile promise faster testing and a significant reduction in the operating cost, but factors such as low success rates and the amount of upfront investment required hinder the adoption of virtualisation techniques.

Aligning to the business
Software applications must keep pace with growing business demands. Business users are increasingly impatient with IT systems that fail to solve real business problems or don't perform as specified. So as competition between vendors increases, and as applications become more demanding in terms of data handling or of greater importance to a business' bottom line, the market for outsourced testing will grow. Outsourcing testing gives companies the ability to reduce testing costs while increasing testing efficiency, taking advantage of a global pool of talent, greater expertise and continuous investment in new methodologies and process improvements. Ultimately this means that they can improve the quality of their software, reduce the time it takes to bring it to market and improve their brand image.

Graham Smith
Head of test consultancy, Europe
Cognizant Technology Solutions

www.cognizant.com



Doctoring the code

Alen Zukich, director at source code analysis specialist Klocwork, explores how software testing can help save lives while reducing costs in the rapidly changing medical device industry.

When lives are at stake, software bugs are not just a nuisance; they can have serious, even fatal, consequences. A glitch in a medical device can mean pacemakers don't keep hearts beating, diabetics can't check their insulin levels and a patient's heart isn't properly monitored. Not to mention the lost revenue and damaged brand reputation for the manufacturer.

Testing challenges in healthcare
The traditional approach to eliminating programming bugs is manual code review, often referred to as peer review. Despite its effectiveness, it's probably the most maligned form of software verification, since it often involves sitting in a room with colleagues and senior designers staring at the code that's been written to ensure code and design integrity and security. Many organisations outsource this task, which is understandable, but it is good practice to have some peer code reviews take place in-house on a regular basis. The danger is an over-reliance on this technique, especially as it is not scalable across large code bases and is prone to human error.

However valuable this straightforward inspection process is, it has limitations. In practice, embedded applications in medical devices have now grown to a point where rigorous manual review of all possible paths and subsystems is unrealistic. For example, the operating system and applications software within a heart-lung machine may consist of millions of lines of code, written by large teams of developers spread around the world. The opportunities for insidious bugs are clearly huge.

Finding a cure
A key point to remember is that the earlier bugs are found, the faster and cheaper it is to correct them. An industry rule of thumb is that a bug that costs one pound to fix when first identified costs a hundred pounds to fix post-integration. Follow that further downstream to the end user, and expenditure can be huge. Imagine the cost of correcting the code in thousands of devices that have been shipped or, even more serious, of a device that causes a terrible failure in the field.

Given that software drives many of these systems, manufacturers need to implement proper verification of their software. The required changes in tools and processes are widely available and are already used in other mission-critical software industries.

Given the magnitude of the bug detection and removal challenge, automation is an obvious solution. One method for automating code inspection is static source code analysis. It detects and identifies the structural deficiencies and weaknesses in software source code that can cause failures. Static analysis tools find bugs early, usually long before integration builds are available for execution, and certainly well before traditional testing activities. This is particularly useful in larger projects, where developers write much of the code before they even have a suitably integrated system that can be executed on the medical device.

For organisations developing mission critical embedded software, static source code analysis will help meet reliability and cost reduction demands. This leads to fewer defects reaching system integration, quality assurance and field deployment.

Many open source and commercial static analysis tools are available for developers of embedded systems. Allowing developers to run accurate, fast analysis within the implement/debug/test cycle can maximise reliability and productivity improvements. Recent growth in the use of static analysis has been most evident in safety critical software including medical devices in particular.




When these tools are put directly into the developers’ hands, fewer coding vulnerabilities make it into the code stream, leading to more secure code, and greater efficiencies and focus during peer code reviews and testing.

Static code analysis in action
International medical device manufacturer Schiller is an example of a company using static analysis effectively in software testing for embedded systems.

Software developers at Schiller work on both embedded software (programmed in C) and Windows-based software (primarily in C++ and Java). Schiller uses Microsoft Visual Studio 2008 for its Windows-based software, while its embedded software runs on Linux, using Eclipse.

Rene Schöps, head of software systems research and development at Schiller, has his developers using Klocwork static code analysis to run checks on their code before checking it in. Building each new piece of software from scratch, Schiller's team has entered the test phase of an embedded project that has been running since 2003, containing over 600,000 lines of code, and due for release this summer.

Schöps explained: “The analysis has helped us identify these errors and fix them, which is essential for us to meet the MISRA standards to which we work. Overall, I’d say Klocwork has helped us bring stability to our system.”

As well as bringing a greater level of quality to their software, Schiller also recognise the potential for speeding up the testing phase of their development cycle. “We see static code analysis as an integral part of our software development set-up at Schiller, a process we are working towards automating as far as possible. We are carrying out regular walk-throughs and code reviews as we get used to working with Klocwork, and I can see that the product is going to help us reduce our development time in future.”
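As a minimal sketch of that working practice (the 'analyse' command below is hypothetical, standing in for whatever static analysis tool is in use), a pre-commit hook might gate check-ins on a clean analysis run:

import subprocess
import sys

def changed_c_files():
    # Files staged for commit (Added, Copied or Modified).
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True).stdout
    return [f for f in out.splitlines() if f.endswith((".c", ".h"))]

def main():
    files = changed_c_files()
    if not files:
        return 0
    # 'analyse' is a hypothetical stand-in for the analysis CLI in use;
    # a non-zero exit code here blocks the commit.
    return subprocess.run(["analyse", *files]).returncode

if __name__ == "__main__":
    sys.exit(main())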

Regulation
Software is an integral part of everything we do now – drive a car, make a phone call, turn on the TV, get on an airplane. The rush for technological solutions and conveniences, combined with the competitive nature of our society, ensures products are brought to market quickly, and medical device manufacturing is no different.

Government and industry body regulation is having an increasing role in dictating how software must be developed for use within industries including medical device manufacturing. Rules dictated by the European Parliament and the Medicines and Healthcare products Regulatory Agency (MHRA) have an increasingly important role to play in the production of medical devices in the UK and Europe. In the United States, the Food and Drug Administration (FDA) has published a well-known guideline, General Principles of Software Validation, which outlines recommended best practices for virtually all aspects of software development.

While regulators have classified devices into varying risk levels (low, medium and high), it's vitally important for manufacturers of these systems to do the best possible job of designing and writing secure software to ensure the validity of the healthcare industry. For many healthcare organisations, this kind of risk management applied to software development is still a relatively new idea.

Developer training
Regardless of what tools and technologies an organisation decides to use, developer awareness and training is at the heart of the issue. The reason is that most developers think of software reliability in terms of whether a particular issue can cause a failure, or whether they've developed code in such a way that it will satisfy the design requirements. Both of those considerations need to happen, but developers now need to ask themselves: "Am I writing this software in a way that is making my code or the whole system less reliable?" The reality is most developers probably can't answer that question. Many programming bugs can often seem quite innocuous to the untrained eye, so it's critical that developers are not only provided with the proper tools to write code without serious bugs, but also have the knowledge to remediate issues. This type of ongoing education will go a long way to helping developers understand whether their coding practices are defensive enough to uphold the reliability of our medical devices.

A matter of life or death
As software moves to centre stage, the value of testing, especially in the production of devices that are literally a matter of life or death, is a key priority. One thing is clear – technology will always outrun the legislation that is put in place to regulate it. But developers that differentiate themselves by embracing high quality testing methods, protecting the brand and validity of their organisation, will succeed, and may even save lives in the process.

Alen Zukich
Director
Klocwork
www.klocwork.com


Don't be a testing tool junkie

With up to 70 percent of the effort for a performance test being spent on scripting, performance consultant Graham Parsons says it's time to look at alternatives that allow you to focus on testing and analysing results, and stop you becoming a tool junkie.

If you are involved in the performance testing discipline, you will no doubt have been faced with IT managers pressing you to reduce your effort and timescale estimates. If you are such a manager, you are probably still frustrated as to why this task takes so long, and why only a few people, or expensive consultants, seem to be able to effectively drive the tools required – which in themselves have cost tens or hundreds of thousands of euros to purchase.

Why does performance testing take so long?
Surveys of testing experts indicate that, on average, approximately 10 percent of the overall effort is spent on gathering the requirements and planning the tests, 70 percent on scripting and 20 percent on actually executing tests and analysing the results. We all know that 'scripting' means programming. Therefore, on average, only 30 percent of the performance tester's time is spent on true QA tasks.

Organisations of all sizes are looking for ways to save cost and generate more revenue. Business is putting pressure on IT to cut costs, stop using external consultants, yet at the same time deliver quality systems in shorter timescales.

Providing estimates of three or four weeks for performance testing at the end of the project, and insisting that development is frozen whilst you are testing, is even less palatable than it used to be. And if you are working in an agile environment, you will already have seen the ‘get with it’ looks when you state performance testing will take longer than a few days!

The reality is that performance testers everywhere are being given less time to confirm whether an application is fit for release.

It is clear that the majority of the effort is expended on creating test scripts, and this takes so much time because the tools are complex, and you are asking testers to effectively become developers (of test scripts).

If anyone asked us in another context if we agreed that developers and testers had different mindsets and abilities, would we disagree? Yet, we still ask our performance testers to write programs for 70 percent of their time!

If you have not had the extensive training, and gained considerable experience using traditional test tools, you will forever be asking yourself “what is the function that I need to do this?” ‘Fighting’ the tool will distract you from ensuring your test scenarios are realistic.

Performance testing – the reality
Some organisations are lucky enough to have the project time, the resources, and the tool experts to plan, configure and execute realistic performance tests. They can test as needed, and release their applications with a high degree of confidence that they will perform and scale as required.

However, it is almost a weekly occurrence that a major organisation’s high-profile application fails due to the load applied by real users. This is particularly true on the web where every outage is reported extensively. As it is reasonable to assume that these applications have all been formally performance tested, why do these failures still keep occurring?

The obvious answer is that the tests executed were not representative of what actually happened when the application was given to real users. I have been called into a number of organisations in the aftermath of a highly public performance failure to assist them in ensuring it will never occur again. When analysing the cause of the failure, it usually becomes apparent that the tests executed contained one or more of the following 'defects':
• Not enough (realistic) variation of the data supplied to the application;
• Not enough variation of routes through transactions (users rarely do A, B, C …);
• Not enough coverage of business transactions;
• Not enough simulated users.

For those of you who test web applications, ask yourself this question: it is well known that users leave a web site at all stages of a transaction; do your test scripts have a percentage of users continuing no further after every page? If not, you are not truly mirroring real-world usage, and therefore your test results could be erroneous (users leaving a site often still have sessions on the server consuming resources).
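A minimal Python sketch, with illustrative journey steps and exit rates, of modelling that per-page abandonment in a load simulation:

import random

# Illustrative journey and per-page exit probabilities.
JOURNEY = ["home", "search", "product", "basket", "checkout", "confirm"]
EXIT_RATE = {"home": 0.30, "search": 0.25, "product": 0.20,
             "basket": 0.15, "checkout": 0.10, "confirm": 0.00}

def simulate_user():
    visited = []
    for page in JOURNEY:
        visited.append(page)                  # request the page; the server
        if random.random() < EXIT_RATE[page]: # session it created lives on,
            break                             # consuming resources, even if
    return visited                            # the user now leaves

runs = 10_000
completions = sum(simulate_user()[-1] == "confirm" for _ in range(runs))
print(f"{100 * completions / runs:.1f}% of simulated users completed checkout")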

When you think about it, it is amazing that even more systems do not fail. It would not surprise me if we were to see more and more application performance failures as IT organisations decide there is no alternative to what they currently do and simply cut corners.

What is the alternative?
Whether it is because of previous damaging application performance problems, monetary pressures, or a lack of tool-expert skills, many organisations are increasingly adopting a new approach based around different performance testing processes and tools.

[Screenshots: GUI screens are laid out to lead you through all that is required to make the test correct; extracting two pieces of random, yet related data from an application response with no scripting.]

The key process change is to mimic something the functional testers adopted many years ago, which has dramatically improved the quality of applications: test earlier and test more often.

It is our experience, across the many hundreds of systems that we have worked with our clients to test, that the vast majority of application performance and scalability problems can be identified with a simulated load of 100 concurrent users.

By providing development teams with a simple mechanism to performance test as they develop (and no extra hardware is needed for a 100-user load), performance problems can be flushed out earlier. Not surprisingly, developers usually jump at the opportunity to improve the performance of their code.

Today, a new breed of much easier-to-use, more intuitive, and far more productive tools is available. These tools have been proven time and again testing large organisations' mission-critical applications. Such tools allow the performance tester to focus on QA tasks ("What should we test? What is 'correct testing' for this application? Are these results acceptable?") and not spend 70 percent of their effort programming test scripts.

True zero scripting testing
All the major tool vendors have realised that their customers are complaining about the complexity of their tools, the fact that you often need expensive external test script 'consultants' (programmers!), and that all of this means performance testing takes a long time and costs a lot of money.

Some tool vendors now claim that 80 percent of the script is automatically generated – which of course sounds great. Sadly, in practice, 80 percent auto-generation can only be achieved for simple test scripts. For most test scripts, you still require the tool expertise and considerable time, as you need to program the script manually. Yet it is a start, and these vendors should be encouraged to do more.

Other tools have for years claimed 'no scripting', fourth-generation languages, etc, but have been restrictive, have not allowed true performance testing, or have had back-door 'add script code here' facilities that you ended up using more and more as you developed correct test scenarios.

However, there is a new breed of tools available that require zero scripting and allow you to do all you need to ensure your tests are correct. These are enterprise class, proven in many organisations, and are being used every day to test mission critical applications. All without a single line of test script!

These tools are aimed at you the testers, are much quicker to use, make it easier for you to create correct test scenarios, imitate all the user actions you need to, and simulate more business transactions.

How zero scripting tools work
All of the enterprise-class zero scripting tools work by removing the script editor and replacing it with GUI screens that you use to configure your simulated business transactions. In the better tools, the screens are laid out in an intuitive manner to prompt you to think about correct testing all the time. You have access to true context-sensitive in-tool help, as opposed to having to wade through bulky reference manuals.

These tools still allow you to do all you need to such as data variation, extracting data from application responses, validating responses, auto-correlating session variables, specifying alternative routes through transactions, etc. The difference is that you are configuring using a GUI, as opposed to programming in a script editor.

Take a look at an average performance test script – how many lines are actually making application requests? Very few when compared to all of those that are initialising and updating variables, extracting response data, looping, forking, etc. It is no wonder that scripting takes so much effort and that even the best and most conscientious testers can make (programming) mistakes.

Zero scripting benefits
What benefit do these new tools deliver to the professional performance tester who wishes to focus on correctly testing applications to ensure they can be released with confidence?

The obvious benefit is that zero scripting tools have the flexibility to be configured to do whatever you need them to, yet you do not write a line of script code. And it goes without saying that configuring tests using GUI screens, which will prevent you from saving incorrect information, is far easier to learn and quicker to use than writing test scripts for most of us.

What does that mean in terms of reduced effort? Well, as the head of performance testing at a major UK testing consultancy stated: "A job that would take 10 days to prepare using LoadRunner takes two or three days using StressTester" – a leading zero scripting tool.

Remember that on average 70 percent of the performance tester’s effort is spent on scripting. On a four week testing project with two testers, this would equate to 28 days of scripting effort. Using a zero scripting tool, this could be reduced to under six days – saving 22 man days – over half the test project’s total effort! Undoubtedly this also reduces the overall duration of the testing project, and adopters of zero scripting tools report that this reduction is usually over 50 percent.

Due to these significant savings, and the fact that with zero scripting tools you no longer need expensive tool-specialist consultants, it is not uncommon for a zero scripting tool to pay for itself in cost savings after just two or three test projects. A side effect of adopting such tools is that you will now spend the majority of your time on true QA tasks, and less time preparing for testing – hopefully something you would all welcome.

The effort and time saving, coupled with the intuitive ‘prompting’ layout of the GUIs in the new tools, delivers a major benefit to organisations: the tests that are executed are more likely to correctly simulate the real world usage of the system, and hence the results of testing can be used with more confidence.

I can only speak for the tool I know very well – StressTester – but it is a fact that no system tested with it has ever experienced a performance or scalability problem after being released into the production environment (and it has been used to test hundreds of well-known, mission-critical financial, publishing, manufacturing and travel applications). This is not only because of the tool; it is because the tool's ease and speed of use allows testers – you and me – to focus on what correct testing is, simulate more business transactions, execute more tests, and hence not cut the corners most of us are being forced to cut at present.

In addition, by adopting an easy to learn and use tool in the development teams, without the need for QA environments or data, developers can quickly and easily spot the majority of the application performance and scalability problems. These can then be corrected much earlier in the development process, which will again shorten the overall project delivery time.

Less effort, less cost
Whether you are under pressure to cut testing timescales (and hence probably sacrifice what you know is 'correct testing'), are adopting an agile process, or just wish to reclaim the role you signed up for (tester – not developer), there is now a proven option that can allow you to test more often, simulate more transactions, and test correctly, in shorter timescales, with reduced effort and less cost.


Graham Parsons
Performance consultant
Reflective Solutions
www.reflective.com


A large proportion of changes that are made to IT systems are done for economic benefit. This might not be true for regulatory requirements or for changes of a purely strategic nature, where return on investment is not expected in the short term. In all cases there are a number of other business drivers apart from revenue and profit considerations that affect the IT department. In essence they all add up to business confidence, which is the ability for the change managers within the business to confidently state that when the business starts to use the new approach there will be at least:
• A smooth transition to a working system;
• Minimal data loss and ideally better performance, but certainly no degradation in user experience;
• A low risk of a reduction in revenue;
• The ability to increase revenue by scaling the solution to the required levels;
• A low risk of being penalised for lack of compliance with security, legal and regulatory standards;
• The planning of contingency measures (eg process or solution reversion) for a range of issues that may arise after delivery.

Using renowned suppliers and ensuring best practice development and project management techniques can go a long way to providing business confidence. However, these tend to result in only qualitative improvements to business confidence and project risk. For example: project Y is bound to have fewer production defects than project X because we are using developers with more experience.

Testing quantifies business confidence
Measuring business confidence depends on testing to quantify the exposure to the risks identified. Figure 1 shows the principal features that make up good testing practice. It is important that testing is independent, as this means the team that carries out the testing can be measured against quality criteria (eg test coverage and defects found) and not against whether the project is on time.

An independent testing team has no motivation to mask the truth, but would rather provide clear and rapid feedback about the results that they are seeing.

It is now received wisdom that testing early has significant financial advantages due to early fault removal. However, these advantages can be eroded by a number of factors such as:
• Duplication of testing;
• Inappropriate application of automation;
• Poor release, environment or data management.
This means test management needs to enforce quality gates to ensure application testability, and needs to work with all stakeholders to minimise the impacts of dependencies.

Central to testing is the ability to be sure that the testing carried out provides information that is relevant to the application that will finally be deployed. This requires experience to avoid common pitfalls but also requires the ability for the test team to innovate. This may be something as simple as a change to reporting techniques to make it clear where defects are being found.

In a time of radical business change and higher numbers of mergers and acquisitions, it is vital that the IT arm of the business can provide genuine business confidence on a number of fronts, not the least of which are business continuity and scalability. Here, SQS business unit director Dave Rigler explores some of the ways in which IT systems can be merged and how good quality management can provide the required level of business confidence.

Quality management and the urge to merge



Alternatively, innovation may come in the form of new approaches to testing. For example, power consumption and other environmental factors cannot be ignored for large system implementations, and a trade-off may need to be made between user experience, in the form of response times, and the power used by the main servers.

Merging existing and acquired IT estates
There are essentially four approaches to merging IT estates. These are summarised in figure 2.

The first approach is for one IT estate to swallow the other one. Like a large python swallowing a large mammal it can take quite a long time to digest another IT estate. The larger the estate that is ‘swallowed’ the greater the pressure on the existing systems eg much larger databases or significantly higher concurrency.

The next approach is to use both IT estates only as a reference point for defining a brand new system. This 'Genesis' approach happens frequently. There are two advantages to this approach: it is possible to take advantage of advances in technology, and the pain associated with supporting legacy applications is removed.

The third approach is to enable one IT estate to talk to the other. This is done by writing bespoke interfaces and creating potentially complex business processes. The advantage of this is that one of the IT systems is left relatively unchanged.

The final approach would be to enable both systems to be able to talk to each other. This rapidly leads to large numbers of interfaces and complex business processes that need to be managed. Fortunately, developments in service oriented architecture (SOA) allow for a standards based approach to be used. SOA provides all of the advantages of the other approaches by allowing some parts to be replaced, selecting one of the existing processes if there is a clear winner and allowing different technology stacks to talk to each other.

It should be noted that SOA is essentially a strategic move for a business, which means that there is an initial quality investment that needs to be understood by the business, and payback for that investment comes in the form of future business agility. This increases the importance of the business being at the forefront of the SOA initiative. If the business is not involved in defining the SOA with architects, then the chances of realising all of the benefits are reduced.

Merging existing and acquired data sets
It is essential that the importance of merging or migrating data sets is not understated. It is recommended that an entire work stream is dedicated to ensuring that data retains its integrity. The responsibilities of this work stream are to:
• Investigate the initial data quality;
• Optimise the source systems where appropriate;
• Migrate the data to the target system;
• Optimise the target system data;
• Manage the transition to business as usual for the new system.
There is a large amount of testing required for the data migration work stream. However, there will be significant overlap with other work streams, and it is important that close collaboration is enforced between all work streams to prevent the duplication of tests.
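A minimal Python sketch of the kind of reconciliation check such a work stream might automate (database, table and column names are hypothetical):

import sqlite3

def reconcile(source_db, target_db, table, key):
    # Crude but illustrative checks: matching row counts and a matching
    # checksum over a key column between source and target systems.
    query = f"SELECT COUNT(*), COALESCE(SUM({key}), 0) FROM {table}"
    with sqlite3.connect(source_db) as src, sqlite3.connect(target_db) as tgt:
        src_count, src_sum = src.execute(query).fetchone()
        tgt_count, tgt_sum = tgt.execute(query).fetchone()
    assert src_count == tgt_count, f"{table}: row counts differ"
    assert src_sum == tgt_sum, f"{table}: {key} checksum differs"

reconcile("legacy.db", "merged.db", "customer", "customer_id")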

Testing for merged IT estates
An internal review of projects at SQS revealed that the main factors important for testing of merged systems were:
• Compliance testing;
• Integration and migration testing;
• Performance testing.

Compliance is important for both internal security requirements and external regulatory or legal requirements. If any of these requirements are not satisfied then your entire business may be in jeopardy. This could stem from customers abandoning a business that cannot keep their data secure or from high fines applied for not complying with regulation. It is important to note that failure to comply with some requirements (eg Data Protection Act) could lead to imprisonment of executives.

Integration and migration testing, which is essentially functional testing of all business requirements, could be considered obvious. However, it is perhaps its ordering before performance that is relevant. The business needs to work – ie all business processes that are supported by the IT systems need to function as expected. In the first instance it does not matter that they perform well – only that they work. This is summarised by the motor racing adage: "If you want to finish first, first you have to finish."

Performance testing allows you to increase the profitability of your business in two ways:
• Increase revenue by trading more effectively;
• Increase profits by reducing overheads and capital expenditure.
If your IT system is more responsive than your competitors’ then you are likely to retain and recruit more customers than them, thereby increasing your market share and revenue.

A reduction in overheads and capital expenditure is possible because you can get optimal performance from your system through the identification and removal of bottlenecks, which obviates the need for increased system resources.
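As a rough illustration of the measurement that drives bottleneck hunting (the transaction and load figures below are invented stand-ins, not a recommendation from the article), response-time percentiles under modest concurrency are often the first numbers to collect:

```python
# A sketch only: drive a business transaction under light concurrency and
# report response-time percentiles. The transaction and load are stand-ins.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def business_transaction():
    """Replace with a real end-to-end call into the system under test."""
    time.sleep(0.05)

def measure(n_requests=200, concurrency=10):
    def timed(_):
        start = time.perf_counter()
        business_transaction()
        return time.perf_counter() - start
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        timings = sorted(pool.map(timed, range(n_requests)))
    # The gap between median and p95 is often the first hint of a bottleneck.
    print(f"median={statistics.median(timings) * 1000:.1f}ms "
          f"p95={timings[int(len(timings) * 0.95)] * 1000:.1f}ms")

if __name__ == "__main__":
    measure()
```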

Quality management strategy
Figure 3 shows the high-level strategy for merging IT estates. The approach is to measure the existing systems and use them to define the requirements for the merged system. The benefits are all realised by proving that the defined system has been delivered. The measuring step on its own does not add significant benefits, but it does make the defined system achievable.

Measure – In the measurement phase it may be sensible to assess the separate systems in the opposite order to the testing that will be carried out for the final system. In essence this means that assessing the performance of the systems is of the highest importance. This is for two reasons. Firstly, it may be possible to identify systems that have sufficient capability to handle the volumes of the merged estate, which can drive cost savings. Secondly, information about the present users’ experience can be collected to help define the non-functional requirements.

It is also important to establish the test cases that can be used to prove that the business processes still operate, and to examine the compliance requirements for each estate.

The toolkit for the measurement phase should be expanded to cover all aspects of static analysis. These techniques provide the opportunity to assess the quality of code, data and the existing architecture. There is a range of tools that can assist in this process; however, there is no substitute for ensuring that the correct expertise is focused on the static analysis. Just because the code has always taken a lot of maintenance, it doesn’t mean that you have to accept this for a new or merged IT estate.

Define – In this phase the measurements are used to establish the requirements for the merged system. It is vital at this stage that the opportunity to incorporate new technologies and developments in best practice is considered.

From the static analysis carried out in the measure phase it is possible to define requirements for improvements in areas such as code maintainability and data integrity. If change is being managed within the business then it represents an ideal opportunity to improve the overall IT capabilities of the business.

Prove – This is where all of the requirements are tested, ideally in descending risk order. As previously discussed, compliance, then migration and integration testing, and then performance testing is a sensible sequence.

Testing in a service oriented architecture
The diagram in figure 4 highlights some of the main features of SOA. There are a number of interrelated features of SOA, all of which must be understood by the quality management team. Testing of SOA must consider the individual services and the overall orchestration of those services into a business-related IT system.

There are two separate lifecycles to be considered. There is the lifecycle of the overall system to be developed, which should follow a top-down integration approach using mock services where development of new services has not been completed. Then there is the lifecycle of an individual service: a service should not be integrated into the overall solution until it has completed an extensive service test phase. This is essentially a competing bottom-up development and testing lifecycle for each service. Typically the bottom-up testing lifecycle is the starting point in the actual test execution. As this lifecycle is worked through, mock services facilitate the top-down service orchestration test lifecycle.
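A minimal sketch of the top-down side of this (the services and their interfaces are invented; Python's standard unittest.mock stands in for whatever stubbing facility a real SOA test harness would provide):

```python
# A sketch only: the orchestration is exercised top-down against a mock of a
# service whose bottom-up build has not yet finished. All names are invented.
from unittest import mock

class OrderOrchestration:
    """Composes two services into one business process."""
    def __init__(self, stock_service, billing_service):
        self.stock = stock_service
        self.billing = billing_service

    def place_order(self, item, qty):
        if not self.stock.reserve(item, qty):
            return "out-of-stock"
        self.billing.invoice(item, qty)
        return "ordered"

def test_orchestration_with_mocked_billing():
    stock = mock.Mock()
    stock.reserve.return_value = True
    billing = mock.Mock()          # the real billing service is not built yet
    result = OrderOrchestration(stock, billing).place_order("widget", 3)
    assert result == "ordered"
    billing.invoice.assert_called_once_with("widget", 3)

test_orchestration_with_mocked_billing()
print("orchestration verified against mocked service")
```

When the real billing service completes its bottom-up service test phase, the mock is swapped out and the same orchestration tests are re-run.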

Service testing is a new phase of testing in addition to the unit, component, system, integration and acceptance phases that are normally considered as part of the quality management strategy. As a result, service testing will often be run in parallel with the existing test phases but not as a phase with a fixed position in the overall quality management strategy.

In short, the bad news is that there is more testing to do for the initial implementation of SOA. However, the deployment of well-tested, portable services means that they can be re-used more readily, providing greater business agility. In addition, because services can be provided by a wide range of technologies, it is possible to be more flexible in the development approach without dictating the languages, operating systems and technology stacks used elsewhere in the business. This flexible development approach allows businesses to ensure processes are retained or modified as required during the development process.

Increased agility, flexibility and confident re-use are the real benefits of SOA implementation.

The urge to merge
Merging IT estates allows the requirements for the final merged system to be developed based on an assessment of existing systems. Compliance, functional integration and migration testing are important, but once the functional requirements have been signed off it is necessary to improve potential profitability by using performance testing to identify and remove bottlenecks in the system.

During high levels of change, when governance is being strictly applied, it is sensible to take the opportunity to deploy new technologies and incorporate best practice wherever appropriate. SOA represents a strategic technology investment, and it is important to note that in terms of quality management the full benefits will not be apparent until the business can utilise the increased agility to respond to market conditions more quickly than its competitors.

Using SOA can increase business confidence because it facilitates best practice and allows for early service and integration testing, which aids early quantification of the risk exposure the business faces from new and restructured elements of the IT estate.

Testing IT | 41

FIGURE 1

FIGURE 2

Dave Rigler Business unit director SQS Group www.sqs-uk.com

FIGURE 3

FIGURE 4


ISEB is part of the British Computer Society (BCS) and is a leading worldwide exam body for IT professionals.

With over 380,000 exams delivered worldwide in over 50 countries, including Australia, Japan, South Africa, USA, Brazil and many others, ISEB continues to lead the way in exams for IT professionals.

Our qualifications are internationally recognised and cover eight major subject areas: Software Testing, ITIL/IT Service Management, IT Assets and Infrastructure, Systems Development, Business Analysis, Project Management, Information Security and IT Governance.

These are available at Foundation, Practitioner and Higher Level to suit each individual candidate. ISEB Professional Level is also available. For more information visit www.iseb-exams.com.

These qualifications allow candidates to learn new skills in specific business and IT areas, and they measure competence, ability and performance. This helps to promote career development and provides a competitive edge for employees.

Delivered via a network of accredited training and examinations providers, the breadth and depth of ISEB’s portfolio encourages knowledge, understanding and application in various disciplines.

BCS

BCS is the leading professional body for those working in IT and communications. Established in 1957 and now with over 66,000 members in more than 100 countries, BCS aims to lead the development of the IT profession and make IT the profession of the 21st century.

BCS is the industry body for IT Professionals and Chartered Engineering Institute for IT. We are responsible for setting standards for the IT profession. Our vision is to see the IT profession recognised as being a profession of the highest integrity and competence.

BCS membership for software testers

BCS membership gives you an important edge; it shows you are serious about your career in IT and are committed to your own professional development, confirming your status as an IT practitioner of the highest integrity.

Our growing range of services and benefits are designed to be directly relevant at every stage of your career.

Industry recognition

Post-nominals – AMBCS, MBCS, FBCS and CITP – are recognised worldwide, giving you industry status and setting you apart from your peers.

BCS received its Royal Charter in 1984 and is currently the only awarding body for Chartered IT Professional (CITP) status, also offering a route to related Chartered registrations, CEng and CSci.

Membership grades

Professional membership (MBCS) is our main professional entry grade and the route to Chartered (CITP) status. Professional membership is for competent IT practitioners who typically have five or more years of IT work experience. Relevant qualifications, eg a computing-related degree, reduce this requirement to two or three years of experience. Associate membership (AMBCS) is available for those just beginning their career in IT, requiring just one year’s experience.

Joining is straightforward – for more information visit: www.bcs.org/membership where you can apply online or download an application form.

Best practice

By signing up to our Code of Conduct and Code of Good Practice, you declare your concern for public interest and your commitment to keeping pace with the increasing expectations and requirements of your profession.

Networking opportunities

Our 44 branches, 16 international sections and over 40 specialist groups, including Software Testing (SIGIST) and Methods & Tools, provide access to a wealth of experience and expertise. These unrivalled networking opportunities help you to keep abreast of current developments, discuss topical issues and make useful contacts.

Specialist Group in Software Testing (SIGIST)

With over 2,500 members SIGIST is the largest specialist group in the BCS. Objectives of the group include promoting the importance of software testing, developing the awareness of the industry’s best practice and promoting and developing high standards and professionalism in software testing. For more information please visit: www.sigist.org.uk.

Information services

The BCS online library is another invaluable resource for IT professionals, comprising over 200 e-books plus Forrester reports and EBSCO databases.

BCS members also receive a 20 percent discount on all BCS book publications. This includes Software Testing: An ISEB Foundation. As well as explaining the basic steps of the testing process and how to perform effective tests, this book provides an overview of different techniques, both dynamic and static, and how to apply them.

Career development

A host of career development tools are available through BCS including full access to SFIA (the Skills Framework for the Information Age) which details the necessary skills and training required to progress your career.

ISEB

BCS, First Floor, Block D, North Star House, North Star Avenue, Swindon, SN2 1FA, United Kingdom
Tel: +44 (0) 1793 417655 Fax: +44 (0) 1793 417559 Email: [email protected] Web: www.iseb-exams.com


Original Software
Our Commitment: Total Software Quality – Available to Everyone

Original Software’s corporate goal is to provide a software test automation solution that will enable everyone involved in the development and implementation of new or enhanced applications to participate in the quality process. With industry-leading innovations such as our code-free technology and self-healing scripts, we have broken down barriers and made test automation available to the widest possible audience. Short learning curves and a reduction in the need for specialised skills will only improve your return on investment.

Total Quality is both a mindset and an agenda. An unacceptable proportion of testing is focused on the visual layer simply because it is the only area that legacy automation software can address adequately. It is our view that every aspect of an application must be tested, from the visual layer through to the underlying processes and database.

This is our commitment. We want to empower the QA and IT professionals to build and maintain a corporate asset: a series of re-usable software quality tests that will embody business knowledge and continue to perform as applications are modified to reflect ever evolving business requirements.

Planning and Management: A coherent planning and management platform is fundamental to the success of any quality enhancement project. Unless the current extent and success of testing can be properly managed and measured, and the results viewed instantly, problems will occur.

Qualify is a complete Application Quality Management (AQM) solution that unites requirements, test cases, test execution, defect management and reporting within one platform. With fully configurable graphical dashboards, class-leading integrated communication channels and multi-format reporting, you now have all the information you need to make truly informed application quality decisions.

Manual Testing: TestDrive-Assist is a totally new concept in testing that offers a significant helping hand for manual testing. It automatically recognises when registered applications are launched, then quietly tracks every user action at both the visual layer and the database level, using screen captures to fully analyse page content. These test tracks can be saved to create audit trails of test results or attached to newly created defects. Once saved, they can be opened as full scripts so the reproduction of an error is accurate and immediate. Testers see time savings in their manual testing straight away, and developers are able to correct defects far faster from these documented results.

Next Generation Functional & Regression Test Automation: Software testing solutions need to be dynamic and flexible. They need to be deployed easily and quickly amongst the broadest possible user base, they need to be easy to use, and they need to cope with the rapid rate of change that occurs in the modern workplace.

Functional and regression test timescales can be significantly reduced through effective test automation. TestDrive uniquely solves the twin issues of complexity and maintenance which have been a stumbling block with previous-generation tools. Our totally code-free, state-of-the-art user interface empowers subject matter experts, enabling them to define and execute sophisticated tests without the need for any kind of code or scripting. Our ground-breaking self-healing script technology enables your investment in test automation to adapt rapidly to new upgrades and releases, so you gain time from minimum maintenance and maximum reuse.

Pair-wise Testing: TestSmart generates optimised data combinations using a “pair-wise” algorithm and will export these values into variable data for use with TestDrive. This scientific approach to testing means that every pair of valid combinations of data will be tested, giving excellent coverage without exploring the often massive numbers associated with attempting to test all theoretical combinations.
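The profile does not describe TestSmart's internals, but the general pair-wise idea can be sketched with a simple greedy generator (the parameter names and values below are invented):

```python
# A sketch only: greedy pair-wise test generation. Every pair of values from
# any two parameters appears in at least one generated test case. This is an
# illustration of the general technique, not the TestSmart algorithm.
from itertools import combinations, product

def pairwise(params):
    """params: dict of parameter name -> list of values."""
    names = list(params)
    uncovered = {((a, va), (b, vb))
                 for a, b in combinations(names, 2)
                 for va, vb in product(params[a], params[b])}
    tests = []
    while uncovered:
        best, best_gain = None, -1
        # Greedily pick the candidate row covering the most uncovered pairs.
        for row in product(*(params[n] for n in names)):
            case = dict(zip(names, row))
            gain = sum(1 for a, b in combinations(names, 2)
                       if ((a, case[a]), (b, case[b])) in uncovered)
            if gain > best_gain:
                best, best_gain = case, gain
        tests.append(best)
        for a, b in combinations(names, 2):
            uncovered.discard(((a, best[a]), (b, best[b])))
    return tests

cases = pairwise({"browser": ["IE7", "Firefox"],
                  "os": ["XP", "Vista", "OSX"],
                  "locale": ["en", "de"]})
print(len(cases), "cases instead of", 2 * 3 * 2)
```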

Test Data Management: Test Data Management (TDM) is fundamental to the success of your data strategy; after all, data drives the entire testing procedure. With bad data comes poor testing, results you cannot trust, and a whole lot of wasted time, money and effort. It pays to get data management right.

Effective test data creation will address issues of disk space, data verification, data confidentiality and protracted test durations. Control of test data ensures that every test starts with a consistent data state, essential in maintaining your data in a predictable state at the end of the test. Checking both visible test results and the database effects is a key principle of AQM, a task which is practically impossible to do manually.

TestBench uniquely controls, tests and manages the data required for effective testing, together with the server-side effects of the test. With TestBench, testing on live data is a thing of the past: subsets that retain referential integrity provide a perfect miniature copy of the live environment. Data scrambling and desensitisation, along with auto-analysis, data manipulation and extraction reports for auditing, mean you can be confident your testing processes will be compliant with all your regulatory controls.
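As a generic illustration of scrambling that preserves referential integrity (this is not TestBench's implementation; the fields and values are invented), deterministic hashing is one simple way to keep foreign keys lined up after masking:

```python
# A sketch only: the same source value always scrambles to the same token,
# so foreign-key relationships in the subset still line up after masking.
import hashlib

def scramble(value: str, salt: str = "per-project-secret") -> str:
    digest = hashlib.sha256((salt + value).encode()).hexdigest()
    return "CUST-" + digest[:8].upper()

customers = [{"id": "C1001", "name": "Alice Jones"}]
orders = [{"order": 1, "customer_id": "C1001"}]

masked_customers = [{**c, "id": scramble(c["id"]), "name": "REDACTED"}
                    for c in customers]
masked_orders = [{**o, "customer_id": scramble(o["customer_id"])}
                 for o in orders]

# The order still points at the same (now scrambled) customer key.
assert masked_orders[0]["customer_id"] == masked_customers[0]["id"]
print("referential integrity preserved after scrambling")
```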


Original Software www.origsoft.com Web for manual testers: www.manualtesting.com [email protected]

Grove House, Chineham Court, Basingstoke, RG24 8AG, United Kingdom Phone: +44 (0)1256 338666 Fax: +44 (0)1256 338678


The Software Testing Club

The Software Testing Club is a relaxed yet professional place for software testers to hang out, find like-minded software testers and get involved in thoughtful and often fun conversations.

Interesting things happen at The Software Testing Club. It started out as an experiment; now, two years on, it has turned into a vibrant online community of software testing professionals.

You'll find members are dedicated to their profession, and you can find them in deep conversation within the forums. However, it's more than just forums and your standard niche social network. As the club grows, new things keep happening, including a Job Board, a Mentoring Group, a collaborative Software Testing Book and a crowd-sourced testing initiative called Flash Mob Testing.

The Software Testing Club is a grassroots effort. It's for the members and grows according to what we believe they want.

Come join and let us know what you think.

Rosie Sherry – Founder & Community Manager Email: [email protected] Tel: +44 (0)7730952537 Web: www.softwaretestingclub.com


Parasoft

SOA Quality as a continuous process
Parasoft empowers organisations to deliver better business applications faster. We achieve this by enabling quality as a continuous process across the SDLC – not just QA. Our solutions promote strong code foundations, solid functional components and robust business processes. Parasoft's SOA solution provides an automated infrastructure that enables SOA quality as a continuous process, allowing you to reap the full benefits of your SOA initiative.

Error prevention
Parasoft's SOA solution allows you to discover and augment expectations around design/development policy and test case creation. These defined policies are automatically enforced, allowing your development team to prevent errors instead of finding and fixing them later in the cycle. This significantly increases team productivity and consistency.

Continuous regression testing
Parasoft's SOA solution assists you in managing the complex and distributed nature of SOA. Given that your SOA is likely to span multiple applications, departments, organisations and business partners, a component-based testing strategy is required. With the Parasoft solution set, you can execute a component-based testing strategy that ultimately allows you to focus on the impact of change. Parasoft's continuous regression tests are applied to multiple layers throughout your system. These tests will immediately alert you when modifications impact application behaviour, providing a safety net that reduces the risk of change and enables rapid and agile responses to business demands.

Functional audit
Parasoft's continuous quality practices promote the reuse of test assets as building blocks to streamline the validation of end-to-end business scenarios impacted by changing business requirements. Functional test suites can be leveraged into load tests without the use of scripting, allowing you to track performance metrics throughout the SDLC. This enables your team to execute a more complete audit of your business application, reduces the risk of business downtime, and ensures business continuity.

Process visibility and control
Parasoft's SOA solution enforces established quality criteria and policies, such as security, interoperability and maintainability, on the various business application artefacts within an SOA. Adherence to such policies is critical to achieving consistency as well as ensuring reuse and interoperability. As a result, you evolve a visible and sustainable quality process that delivers predictable outcomes.

Please contact us to arrange either a one-to-one briefing session or a free evaluation.

Web: www.parasoft.com Email: [email protected] Tel: +44 (0) 208 263 6005 Fax: +44 (0) 208 263 6100



Seapine Software
www.seapine.com

United Kingdom, Ireland, and Benelux: Seapine Software Ltd, Building 3, Chiswick Park, 566 Chiswick High Road, Chiswick, London, W4 5YA, UK. Phone: +44 (0) 208-899-6775. Email: [email protected]

Americas (Corporate Headquarters): Seapine Software, Inc. 5412 Courseview Drive, Suite 200, Mason, Ohio 45040 USA Phone: 513-754-1655

With over 8,500 customers worldwide, Seapine Software Inc is a recognised, award-winning, leading provider of quality-centric application lifecycle management solutions. With headquarters in Cincinnati, Ohio and offices in London, Melbourne and Munich, Seapine is uniquely positioned to directly provide sales, support, and services around the world. All customers receive the same consistently high level of responsiveness and customer care.

Built on flexible architectures using open standards, Seapine Software’s cross-platform ALM tools support industry best practices, integrate into all popular development environments and run on Microsoft Windows®, Linux®, Sun Solaris®, and Apple Macintosh® platforms.

Seapine Software's integrated software development and testing tools streamline your development and QA processes – improving quality, and saving you significant time and money.

TestTrack TCM

A scalable, cross-platform test case management solution that manages all areas of the software testing process, including test case creation, scheduling, execution, measurement, and reporting. TestTrack TCM's powerful workflow and extensive customisation capabilities make it easy to bring traceability to the testing process. Reporting and graphing capabilities, along with user-definable data filters, give you the tools you need to easily measure the progress and quality of your testing effort.

TestTrack Pro

A powerful, configurable, and easy-to-use issue management solution. Its timesaving communication and reporting features keep team members informed and on schedule. TestTrack Pro supports MS SQL Server, Oracle, and other ODBC databases, and its open interface is easy to integrate into your development and customer support processes. Cross-platform client and server support provides full functionality for Windows, Mac OS X, Linux, and Solaris users, and enables hosting on your preferred operating system.

QA Wizard Pro

Automates the functional and regression testing of Web, Windows, and Java applications, helping quality assurance teams increase test coverage and deliver high-quality solutions faster. Featuring a next-generation scripting language and an easy-to-use Grid View, QA Wizard Pro includes advanced object searching, smart matching, a global application repository, data-driven testing support, validation checkpoints, and built-in debugging. It also includes batch file support, a real-time status tool, and remote execution support.

Surround SCM

Controls access to source code files and other development assets, and tracks changes over time. All data is stored in industry-standard relational database management systems for greater security, scalability, data management, and reporting. Surround SCM’s change automation, caching proxy server, custom metadata, labels, and virtual branching streamline parallel development and provide complete control over the software change process.



Leysen Associates

Leysen has specialised in software and system testing recruitment for over 13 years. One of the first recruitment companies dedicated to the testing and QA market, we are still one of the market leaders.

Testing Recruitment
The Team – Our extensive experience in this niche market allows us to keep one step ahead of the competition. Over time we have built excellent relationships with both our clients and candidates. This means we can efficiently assess and understand your requirement and, either through the database or by using our network of contacts, quickly respond to your needs. Forty percent of our placements last year were people who were not actively looking for a new role – ie their CVs were not active in the market – but were matched with suitable roles by our team.

Our teams keep up to date with current developments by attending testing conferences, and have been involved in running the BCS SIGIST for the last ten years.

Whatever the extent of your testing requirement, Leysen can help. Whether you require a permanent individual to carry out system testing or a complete team of contractors, call our resource team now on 01483 211888 or email your vacancy details to [email protected]

Our database boasts:
• Over 15,000 pre-screened testing candidates, updated and increasing daily;
• Candidates with a minimum of two years’ recent solid testing experience, the majority with more;
• A significant number of candidates who are ISEB qualified with the Foundation, Intermediate or Practitioner certificates in software testing;
• Candidates with a wide and varied range of skills and abilities to suit your needs;
• Candidates with experience in testing on all platforms and testing tools.

Clients – As you can imagine, our client list is extensive. We have placed candidates and built teams of testers across the UK, Europe and the US, from multinational blue chips to local software houses.

Testing training – In addition to our main business of recruitment, Leysen believes in promoting testing as a profession and getting testers qualified to ISEB/ISTQB standard. We offer training at Foundation, Intermediate or Practitioner level, delivered as CD-ROM self-study, online or classroom-based courses.

The self-study CD-ROMs for Foundation and Intermediate are available at £150* and £195* respectively (*plus £2.50 P&P, plus VAT). For a free evaluation please see: www.leysen.com/selfstudyiseb.asp

Email: [email protected] Web: www.leysen.com Tel: +44 (0) 1483 211888 Fax: +44 (0) 1483 211887


iTrinegy
Network emulation & application testing tools
iTrinegy is Europe’s leading producer of network emulator technology, which enables testers and QA specialists to conduct realistic pre-deployment testing in order to confirm that an application is going to behave satisfactorily when placed on the final production network.

Delivering more realistic testing
Increasingly, applications are being delivered over wide area networks (WANs), wireless LANs (WLANs), GPRS, 3G, satellite networks and so on, where network characteristics such as bandwidth, latency, jitter and packet error or loss can have a big impact on their performance. So there is a growing need to test software in these environments.
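The underlying idea is easy to sketch in code. The toy below is not iTrinegy's product, and the profile values are invented: it simply wraps whatever transport the application under test uses, and injects latency, jitter and loss drawn from a target network profile:

```python
# A sketch only: a toy network-condition injector, standing in for a real
# network emulator. Profile values below are invented for illustration.
import random
import time

class EmulatedLink:
    def __init__(self, latency_ms=300, jitter_ms=100, loss_rate=0.02):
        self.latency_ms = latency_ms
        self.jitter_ms = jitter_ms
        self.loss_rate = loss_rate

    def send(self, payload, transport):
        """Deliver payload via `transport` under the emulated conditions."""
        if random.random() < self.loss_rate:
            raise TimeoutError("packet lost by emulated network")
        delay = self.latency_ms + random.uniform(0, self.jitter_ms)
        time.sleep(delay / 1000.0)
        return transport(payload)

# Usage: a 'GPRS-like' profile wrapped around the real send function.
gprs = EmulatedLink(latency_ms=600, jitter_ms=200, loss_rate=0.05)
echo = lambda data: data
try:
    print(gprs.send(b"hello", echo))
except TimeoutError as exc:
    print("delivery failed:", exc)
```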

iTrinegy Network Emulators enable you to quickly and easily recreate a wide range of network environments for testing applications, including VoIP, in the test lab or even at your desktop.

Ease of use
Our network emulators have been developed for ease of use:
• No need to be a network expert in order to use them;
• Pre-supplied with an extensive range of predefined test network scenarios to get you started;
• Easy to create your own custom test scenarios;
• All test scenarios can be saved for subsequent reuse;
• Automated changes in network conditions can be applied to reflect the real world;
• Work seamlessly with load generation and performance tools to further enhance software testing.

A comprehensive range to suit your needs
iTrinegy’s comprehensive range of network emulators is designed to suit your needs and budget. It includes:
• Software for installation on your own desktop or laptop (trial copies available);
• Small, portable inline emulators that sit silently on the desktop and can be shared amongst the test team;
• Larger portable units capable of easily recreating complex multi-path, multi-site, multi-user networks for full enterprise testing;
• High-performance rack-mount units designed to be installed in dedicated test labs;
• Very high-performance units capable of replicating high-speed, high-volume networks, making them ideal for testing applications in converged environments.

If you would like more information on how our technology can help you ensure the software you are testing is ‘WAN-ready’ and going to work in the field, please contact iTrinegy using the details below:

Email: [email protected] Tel: +44 (0)1799 543 345 Web: www.itrinegy.com



The last word...

Everyone seems to be going on about it: social media, collaboration, crowd sourcing. But what does it all really mean? Rosie Sherry's been finding out.

Working together, better

They say anyone can work from anywhere. This is true, but only partially. Of course it doesn’t guarantee the work done will be any good. First of all, who is anyone? Is that anyone as in everyone? Or anyone you know? Or anyone you believe is capable? And where is anywhere? Anywhere with an internet connection? You catch my drift. And of course, it’s a whole different story when you put these thoughts together within the world of testing.

Crowd sourcing
There are so many people out there who want to work and make money – some clever and productive people too. The problem is that great, talented, hard workers are few and far between.

uTest is probably the most widely recognised crowd-sourced testing company. Mob4Hire is another, specialising in the mobile area. Of course there are other sites that aren't testing-specific, such as oDesk.

Quality
The problem with many of the above services is that it is difficult to truly understand the people behind the computer. Yes, there can be stats, good feedback, bad feedback, a history, a profile description. But isn't that really just like a CV? How do we get the equivalent of an interview, or perhaps a chat over a cuppa?

Many freelance-focused sites boast about how many members they have. They say they are the largest, that they have thousands or millions of members, or that they have been around since 1901. Since when did quantity and longevity equal quality?

The problematic people
The social web has definitely gone a long way towards solving some of these problems. It's pretty easy to set up collaboration tools. There's something for everybody – wikis, document management, social networks, web conferencing, teleconferencing.

All these tools and technology would have cost you a (small) fortune a few years ago (if they even existed). Now they can be used for free or at very reasonable cost. The problem in the past used to be the technology. Now the problem comes down to people, and the psychology behind getting people to work better and adapt to change.

We mustn't forget how fast things are changing at the moment. Not only do people naturally fight against change, but work and technology are moving so quickly that it's a job in itself to keep up with everything.

Figuring things out
We have the technology. Now we just need to figure out how to make it all work with people:

– How do we find the best people in the field?
– How can we be confident they are as good as they say?
– What are their strengths and weaknesses?
– How do they fit into the rest of the team?
– Do we trust them?
– Are they reliable?
– Is it possible to have those ‘coffee chats’ in a different way?
– Can they communicate online and in real life?
– How can every team member use the technology consistently and effectively?

The answers
I don't have the answers – who does? I do believe that baby steps help, as does the freedom to experiment to understand what will and won't work. And of course, a reality check that we are (mostly) working with other people won't do you any harm.

Rosie Sherry
Founder & community manager, The Software Testing Club
www.softwaretestingclub.com
