
Testing Services and Service-Centric Systems: Challenges and Opportunities

Gerardo Canfora and Massimiliano Di Penta

Service-oriented architectures' unique features, such as dynamic and ultra-late binding, raise the need for new testing methods and tools.

IT Professional, March/April 2006. 1520-9202/06/$20.00 © 2006 IEEE. Published by the IEEE Computer Society.

With service-oriented architectures (SOAs), software is used and not owned, and operation happens on machines that are out of the user's control. Because a lack of trust prevents service computing's mainstream adoption, a key issue is providing users and system integrators the means to build confidence that a service delivers a function with the expected quality of service (QoS).

Unfortunately, many established testing methods and tools don't work with services. For example, to users and systems integrators, services are just interfaces. This hinders white-box testing methods based on code structure and data-flow knowledge. Lack of access to source code also prevents classical mutation-testing approaches, which require seeding the code with errors.

In this article, we provide users and system integrators with an overview of SOA testing's fundamental technical issues and solutions, focusing on Web services as a practical implementation of the SOA model. We discuss SOA testing across two dimensions:

• Testing perspectives. Various stakeholders, such as service providers and end users, have different needs and raise different testing requirements.

• Testing levels. Each SOA testing level, such as integration and regression testing, poses unique challenges.

TESTING PERSPECTIVES
SOA testing has similarities to commercial off-the-shelf (COTS) testing. The provider can test a component only independently of the applications in which it will be used, and the system integrator is not able to access the source code to analyze and retest it. However, COTS components are integrated into the user's system deployment infrastructure, while services live in a foreign infrastructure. Thus, the QoS of services can vary over time more markedly and unpredictably than that of COTS components. This QoS issue calls for specific testing to guarantee the service-level agreements (SLAs) stipulated with consumers.

Different stakeholders might want to test individual services or service-centric systems to ensure or verify SLA adherence. Each perspective has its specific requirements and issues.

Service developer
Aiming to release a highly reliable service, the service developer tests the service to detect the maximum possible number of failures. Among other things, the developer also tries to assess the service's nonfunctional properties and its ability to properly handle exceptions. Although testing costs are limited in this case (the developer does not have to pay when testing his own service), nonfunctional testing isn't realistic because it doesn't account for the provider and consumer infrastructure or for the network configuration and load.

Service provider
The service provider tests the service to ensure it can guarantee the requirements stipulated in the SLA with the consumer. Testing costs are limited. However, the provider can't use white-box techniques, and nonfunctional testing doesn't reflect the consumer infrastructure and network configuration or load.

Service integrator
The service integrator tests to gain confidence that any service to be bound to her own composition fits the functional and nonfunctional assumptions made at design time. Runtime binding can make this more challenging because the bound service is one of many possible, or even unknown, services. Furthermore, the integrator has no control over the service in use, which is subject to changes during its lifetime. Testing from this perspective requires service invocations and results in costs for the integrator and wasted resources for the provider.

Third-party certifier
The service integrator can use a third-party certifier to assess a service's fault-proneness. From a provider perspective, this reduces the number of stakeholders (and thus resources) involved in testing activities. However, the certifier doesn't test a service within any specific composition (as the integrator does) or from the same network configuration as the service integrator. This raises serious issues about the guaranteed confidence level.

User
The user has no clue about service testing. His only concern is that the application he's using works while he's using it. For the user, SOA's dynamicity represents both a potential advantage (for example, better performance, additional features, or reduced costs) and a potential threat, such as unpredictable response time and availability. Making a service-centric system capable of self-retesting certainly helps reduce such a threat. Once again, however, testing from this perspective incurs costs and wastes resources. Imagine if a service-centric application installed on a PDA and connected to the network through a General Packet Radio Service connection were to suddenly self-test by invoking several services.

TESTING LEVELS
Testing comprises activities that validate a system's aspects. New challenges arise at each SOA testing level. (See the "SOA Testing Technologies" sidebar for a sampling of commercial and open source testing tools.)

SOA Testing Technologies

The following list of products, by no means comprehensive, provides an initial roadmap to SOA testing technology. The products cover unit, integration, and system testing and include both commercial and open source tools.

➤ ANTS Load (http://www.red-gate.com/products/ANTS_Load) supports testing Web services behavior and performance under the stress of a multiple-user load.

➤ e-TEST (http://www.empirix.com) suite for Web services provides ways to generate Web services test scripts, validate XML responses, and identify performance bottlenecks by server-side monitoring.

➤ JBlitz (http://www.clanproductions.com/jblitz) carries out stress, performance, and functionality testing by generating different loading levels and records anomalies as they occur.

➤ SOAPscope (http://www.mindreef.com) supports testing SOAP transactions by monitoring communications among SOAP endpoints and analyzing Web Services Description Language (WSDL) and SOAP messages against industry standards, such as Web Services-Interoperability.

➤ SOA Test (http://www.parasoft.com) supports WSDL validation and functionality, performance, and scalability testing. It features a collaborative workflow in which engineers create test cases that the quality assurance team can leverage into scenario-based testing.

➤ TestMaker (http://www.pushtotest.com) is a free open-source framework for building automated test agents that check Web services for scalability, performance, and functionality. The commercial TestNetwork platform allows test agents to run on multiple test nodes concurrently.

➤ WebServiceTester (http://www.optimyz.com) is an integrated testing suite for functionality, performance, and regression testing of Web services. It comprises automated test data generation and testing of service orchestrations based on Business Process Execution Language for Web Services.


Service functional testing
Generally speaking, providers and integrators can perform service functional testing using techniques common in component or subsystem testing. To enable service discovery, providers publish services with more or less thorough specifications. At minimum, service specifications include a Web Services Description Language interface with an XML schema that specifies I/O types. These I/O types define value domains that providers or integrators can use to generate test cases according to a functional, black-box strategy.
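To make the black-box strategy concrete, the sketch below (not from the article) draws test inputs from value domains that could be extracted from a service's WSDL and XML schema; the parameter names and ranges are invented for illustration.

import random

# Hypothetical value domains derived from a service's XML schema.
# In practice these would be parsed from the WSDL/XSD of the service under test.
INPUT_DOMAINS = {
    "quantity": {"type": "xsd:int", "min": 0, "max": 10_000},
    "currency": {"type": "xsd:string", "enum": ["EUR", "USD", "GBP"]},
}

def generate_test_case(domains, rng):
    """Pick one value per parameter: boundary values plus a random interior value."""
    case = {}
    for name, dom in domains.items():
        if "enum" in dom:
            case[name] = rng.choice(dom["enum"])
        else:
            case[name] = rng.choice([dom["min"], dom["max"],
                                     rng.randint(dom["min"], dom["max"])])
    return case

if __name__ == "__main__":
    rng = random.Random(42)
    suite = [generate_test_case(INPUT_DOMAINS, rng) for _ in range(5)]
    for case in suite:
        print(case)  # each case would be wrapped in a request and sent to the service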

In this context, mutation strategies assume an important role. Mutation strategies change, or mutate, inputs. Applying these changes to input messages, we can check whether these mutations produce observable effects in the service outputs, as in the SOAP example in Figure 1.

If a service has a semantically rich specification, we can apply more sophisticated test-generation strategies. For example, we can use preconditions to generate inputs and postconditions to create test oracles, that is, sources of expected results for the test cases.
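As an illustration of this contract-driven approach, here is a minimal sketch that uses a precondition to filter generated inputs and a postcondition as the oracle. The squareRoot operation, its contract, and the tolerance are hypothetical; a real test would replace the local stand-in with an actual service invocation.

import math
import random

# Hypothetical contract for a squareRoot service operation.
def precondition(x: float) -> bool:
    return x >= 0.0

def postcondition(x: float, result: float, eps: float = 1e-6) -> bool:
    return abs(result * result - x) < eps * max(1.0, x)

def invoke_square_root(x: float) -> float:
    # Stand-in for the remote call; a real test would send a SOAP/REST request.
    return math.sqrt(x)

def run_contract_tests(n: int = 100) -> None:
    rng = random.Random(0)
    for _ in range(n):
        x = rng.uniform(0.0, 1e6)
        if not precondition(x):       # generate only inputs the precondition admits
            continue
        result = invoke_square_root(x)
        assert postcondition(x, result), f"postcondition violated for x={x}"
    print(f"{n} contract-based test cases passed")

if __name__ == "__main__":
    run_contract_tests()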

Service nonfunctional testing
When acquiring a service, the user or service integrator and the provider agree to an SLA, which stipulates that the provider ensure a specific QoS level to the user. The SLA often results from negotiations over what the provider can offer and what the user needs and can afford.

A service that doesn't respond with the expected QoS can be considered in violation of the SLA. External factors, such as heavy network or server load, can affect a service's performance. In some cases, however, an SLA violation can result from particular inputs that neither party considered when stipulating the agreement, as in the image-processing example in Figure 2.

We must therefore stress test SLAs. A possible strategy is to use search-based techniques (such as genetic algorithms) to generate test cases that are likely to violate the SLA. In the past, stress testing for real-time systems has used similar approaches.

[Figure 1 diagram: Web service A sends a login request to Web service B; B analyzes the request, queries the user database, and returns a response. The request can be modified (mutated) before it is sent.]

Original SOAP request:
<adminLogin>
  <arg0 xsi:type="xsd:string">turing</arg0>
  <arg1 xsi:type="xsd:string">enigma</arg1>
</adminLogin>

Mutated SOAP request:
<adminLogin>
  <arg0 xsi:type="xsd:string">turing ' OR '1' = '1'</arg0>
  <arg1 xsi:type="xsd:string">enigma ' OR '1' = '1'</arg1>
</adminLogin>

Resulting SQL query:
SELECT username
FROM adminuser
WHERE username = 'turing' OR '1' = '1'
AND password = 'enigma' OR '1' = '1'

Service response:
<accessGranted>true</accessGranted>

Figure 1. Mutation strategy can help generate test cases.

First, mutate the request message (for example, a SOAP message for Web services), then analyze the service response to detect faults. In this example, the SOAP message contains an XML-encoded username and password. Web service B queries the user database: if the adminuser table contains at least one row with that username and password, access is granted. If the mutations add the text ' OR '1' = '1' to both username and password, the query will always return a nonempty recordset, thus granting access. For more details, see the article by Offutt and Xu in the "Further Reading" sidebar.
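The following sketch mimics the data-perturbation scenario of Figure 1: it sends a baseline SOAP request, applies an SQL-injection-style mutation to the string arguments, and reports a fault if the mutated request still yields accessGranted = true. The endpoint URL and message layout are illustrative, not taken from a real service.

import urllib.request
import xml.etree.ElementTree as ET

# Illustrative endpoint and SOAP body; a real test would build these from the service's WSDL.
SERVICE_URL = "http://example.org/adminService"   # hypothetical endpoint
SOAP_TEMPLATE = """<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <adminLogin>
      <arg0 xsi:type="xsd:string" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xmlns:xsd="http://www.w3.org/2001/XMLSchema">{user}</arg0>
      <arg1 xsi:type="xsd:string" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xmlns:xsd="http://www.w3.org/2001/XMLSchema">{password}</arg1>
    </adminLogin>
  </soap:Body>
</soap:Envelope>"""

def mutate(value: str) -> str:
    """SQL-injection-style perturbation of a string argument (as in Figure 1)."""
    return value + "' OR '1' = '1"

def call_service(user: str, password: str) -> str:
    body = SOAP_TEMPLATE.format(user=user, password=password).encode("utf-8")
    req = urllib.request.Request(SERVICE_URL, data=body,
                                 headers={"Content-Type": "text/xml; charset=utf-8"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read().decode("utf-8")

def access_granted(response_xml: str) -> bool:
    root = ET.fromstring(response_xml)
    node = root.find(".//accessGranted")
    return node is not None and (node.text or "").strip().lower() == "true"

if __name__ == "__main__":
    baseline = call_service("turing", "enigma")
    mutated = call_service(mutate("turing"), mutate("enigma"))
    if access_granted(mutated):
        print("FAULT: mutated credentials were accepted (possible SQL injection)")
    else:
        print("Mutation rejected; baseline access granted:", access_granted(baseline))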

Further Reading

➤ Framework to support service testing: "Coyote: An XML-Based Framework for Web Services Testing," W.T. Tsai and colleagues, Proc. 7th IEEE Int'l Symp. High Assurance Systems Eng. (HASE 02), IEEE CS Press, 2002, pp. 173-176.

➤ Service monitoring: "Smart Monitors for Composed Services," L. Baresi, C. Ghezzi, and S. Guinea, Proc. 2nd Int'l Conf. Service-Oriented Computing (ICSOC 04), ACM Press, 2004, pp. 193-202.

➤ Service regression testing: "Using Test Cases as Contract to Ensure Service Compliance across Releases," M. Bruno and colleagues, Proc. 3rd Int'l Conf. Service-Oriented Computing (ICSOC 05), LNCS 3826, Springer, 2005, pp. 87-100.

➤ SOAP message mutation: "Generating Test Cases for Web Services Using Data Perturbation," J. Offutt and W. Xu, ACM SIGSOFT Software Engineering Notes, vol. 29, no. 5, 2004, pp. 1-10.

➤ Use of graph-transformation techniques to generate test cases for Web services: "Automatic Conformance Testing of Web Services," R. Heckel and L. Mariani, Proc. Fundamental Approaches to Software Eng. (FASE 05), LNCS 3442, Springer, 2005, pp. 34-48.


In this context, the QoS measured when executing the service with the generated inputs guides the search.
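Below is a minimal sketch of that idea, assuming a hypothetical image-processing service and a simulated response-time measurement: candidate inputs are evolved generation by generation, with the measured QoS as the fitness, until some input exceeds the (invented) SLA bound.

import random

SLA_RESPONSE_TIME_MS = 40.0  # hypothetical SLA bound on response time

def measured_response_time(width: int, height: int, iterations: int) -> float:
    """Stand-in for invoking the real service and measuring its QoS."""
    return 1e-5 * width * height + 2.0 * iterations + random.uniform(0.0, 2.0)

def random_input(rng: random.Random):
    return (rng.randint(16, 2048), rng.randint(16, 2048), rng.randint(1, 10))

def mutate(ind, rng: random.Random):
    w, h, it = ind
    clamp = lambda lo, v, hi: max(lo, min(hi, v))
    return (clamp(16, w + rng.randint(-200, 200), 2048),
            clamp(16, h + rng.randint(-200, 200), 2048),
            clamp(1, it + rng.randint(-2, 2), 10))

def search_for_sla_violation(generations=50, pop_size=20, seed=1):
    rng = random.Random(seed)
    population = [random_input(rng) for _ in range(pop_size)]
    for gen in range(generations):
        # Fitness: measured response time (higher means closer to an SLA violation).
        evaluated = [(measured_response_time(*ind), ind) for ind in population]
        evaluated.sort(reverse=True)
        best_time, best = evaluated[0]
        if best_time > SLA_RESPONSE_TIME_MS:
            return best, best_time, gen
        # Keep the fittest half and refill the population with mutated copies.
        survivors = [ind for _, ind in evaluated[: pop_size // 2]]
        population = survivors + [mutate(rng.choice(survivors), rng)
                                  for _ in range(pop_size - len(survivors))]
    return None

if __name__ == "__main__":
    print("SLA violation found:", search_for_sla_violation())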

Integration testing
SOA's peculiar characteristics raise serious integration testing issues. Dynamic binding lets us define service-centric systems as a workflow of invocations to abstract interfaces that are bound to concrete services either just before workflow execution or even during enactment. Thus, classical integration testing methods fail when they require the precise identification of system components and their interrelationships.

Because of dynamic binding, we must test a composite service's partner link for all possible endpoints, as in Figure 3. The problem is similar to testing an object-oriented system in the presence of polymorphism. However, with SOAs the problem is worse. Testing against all possible endpoints could be costly, and endpoints might be unknown at testing time.

We need heuristics that reduce the number of possible endpoints for testing. For example, if our binding strategy won't consider endpoints leading to a violation of a QoS constraint, it might not be useful to perform integration testing against them.
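Such a heuristic can be as simple as filtering the candidate endpoints before integration testing, as in the sketch below; the endpoint names, QoS figures, and thresholds are invented for illustration.

from dataclasses import dataclass

@dataclass(frozen=True)
class Endpoint:
    name: str
    cost_usd: float
    response_time_ms: float

# Hypothetical candidate endpoints for one abstract interface of the composition.
CANDIDATES = [
    Endpoint("HotelSearchA", cost_usd=20.0, response_time_ms=5.0),
    Endpoint("HotelSearchB", cost_usd=9.0, response_time_ms=18.0),
    Endpoint("HotelSearchC", cost_usd=12.0, response_time_ms=9.0),
]

def prune_by_qos(candidates, max_cost, max_response_time):
    """Keep only endpoints the binder could actually choose under the QoS constraints."""
    return [e for e in candidates
            if e.cost_usd <= max_cost and e.response_time_ms <= max_response_time]

if __name__ == "__main__":
    to_test = prune_by_qos(CANDIDATES, max_cost=15.0, max_response_time=10.0)
    print("Endpoints worth integration testing:", [e.name for e in to_test])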

Nonfunctional testing also becomes more complex for service compositions that allow dynamic binding. The resulting QoS depends not only on the inputs but also on the combination of actual bindings. Test case generation is more complex and expensive because it must find combinations of bindings and inputs that can cause SLA violations.

More complex scenarios uncover further integration testing needs. If a service is unavailable at runtime or if it must be replaced (for example, if it can't guarantee a given QoS), the replacement can be a newly discovered service or even a composition of services when no single service can fulfill the task.

[Figure 2 diagram: a workflow with abstract tasks Gray, Scale, Posterize, and Sharpen, each of which can be bound to one of three candidate services (A, B, C). Extracted cost/response-time annotations: Posterize candidates at $5/10 ms, $10/5 ms, and $7/7 ms; Scale candidates at $0.25/10 ms, $0.50/5 ms, and $1/2 ms; Sharpen candidates at $5/5 ms, $2/8 ms, and $1/10 ms. Branch conditions: dim1 = dim2 versus dim1 ≠ dim2, and posterize = true versus posterize = false; the sharpen step iterates nsharpen times. SLA constraint: cost ≤ $35. Input: img1.bmp, posterize = true, nsharpen = 5. Total cost = $36 > $35.]

Figure 2. Testing quality of service through service compositions.

Some combinations of inputs and bindings can cause service-level agreement (SLA) violations. Take, for example, an image-processing composite service that can produce gray-scaled images or posters. To obtain a poster, you scale the image (available only for square images), apply the posterize filter, and sharpen the image outline. You can repeat the sharpen step to make edges more visible. Suppose we have an SLA cost constraint of $35. Generating combinations of inputs and bindings for a given strategy, we discover that making a poster from a scalable image with at least five applications of the sharpening filter violates our cost constraint.
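The figure's arithmetic can be reproduced with a few lines: sum the cost of each bound service along the executed path, counting the sharpen cost once per iteration, and compare the total with the SLA bound. The specific prices below (scale $1, posterize $10, sharpen $5) are only one plausible binding that yields the $36 total; the figure does not say which concrete services they belong to.

SLA_MAX_COST_USD = 35.0

def poster_path_cost(scale_cost: float, posterize_cost: float,
                     sharpen_cost: float, nsharpen: int) -> float:
    """Cost of the poster branch: scale, posterize, then nsharpen sharpen invocations."""
    return scale_cost + posterize_cost + sharpen_cost * nsharpen

if __name__ == "__main__":
    # One illustrative combination of bindings and inputs (posterize = true, nsharpen = 5).
    total = poster_path_cost(scale_cost=1.0, posterize_cost=10.0,
                             sharpen_cost=5.0, nsharpen=5)
    print(f"total cost = ${total:.0f}, SLA bound = ${SLA_MAX_COST_USD:.0f}")
    print("SLA violated" if total > SLA_MAX_COST_USD else "SLA respected")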


Despite the complex automatic discovery-and-composition mechanisms available, the integrator must adequately test the service or composition before using it. In this case, test time must be minimized because it affects runtime performance.

The Web Services-Interoperability (WS-I, http://www.ws-i.org) organization is working to solve such issues as well as broader interoperability concerns. See the "Web Services-Interoperability" sidebar for more information.

Regression testing
The provider controls the evolution strategy of the software behind a service, thereby further complicating testing. Regression testing (retesting a piece of software after a round of changes to ensure the changes don't adversely affect the delivered service) requires that service users and systems integrators know the service release strategy. However, the service provider might not be aware of who's using the service, so the provider can't notify those users about changes to the service's interface or implementation.

Integrators' lack of control over services truly differentiates SOAs from component-based systems and from most distributed systems. In those systems, one organization is often responsible for the distributed components it uses. During a service-centric system's lifetime, services can change their functionality or QoS without necessarily exposing those changes through their interfaces. This affects the application's behavior and, from a legal point of view, could be considered an SLA violation.

For example, imagine a service integrator using a service that computes a person's social security number (SSN). If the way the SSN is computed changed, the system's behavior would also change. The results would be similar if, for example, a hotel booking service stopped accepting bookings without a credit card.

Any service integrated into a composition requires regression testing to ensure its compliance with the integrator's assumptions and with the provider-integrator contract.

[Figure 3 diagram: a travel workflow whose abstract interfaces (hotelSearch, checkFlight, getShuttleTicketPrice, getCabPrice) can each be bound to several concrete services, including PremiereHotelSearch, DeluxeHotelSearch, BuzzAirlines, FastFly, JrShuttleService, FastShuttleService, BarCab, FooCab, and JCab, each annotated with a cost (between $5 and $20) and a response time (between 5 and 20 ms). A replacement composition involving getAddress and hotelSearchByAddr can substitute for part of the workflow; the figure also shows monitoring and testing of the replacement composition.]

Figure 3. Dynamic binding can exponentially increase the cost of service integration testing.

In this workflow, each abstract interface (hotelSearch, checkFlight, getShuttleTicketPrice, getCabPrice) can be bound to several possible concrete services. Each call site requires testing all possible concretizations, assuming they're known. Stronger testing criteria might require testing the workflow with all possible combinations of bindings (24 in this case). However, constraining bindings can reduce this number; for example, it's reduced to six if the cost must be within $15. Service replanning is also an issue. When a new composition replaces part of the workflow (because, for example, a service isn't available), the composition requires testing to ensure it delivers the same functionality. We can exploit monitored I/O from the replaced service for this purpose.
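A small sketch of the enumeration Figure 3 implies: with two candidates each for hotelSearch, checkFlight, and getShuttleTicketPrice and three for getCabPrice, there are 2 × 2 × 2 × 3 = 24 binding combinations. The per-service costs and the budget used for pruning are illustrative; the article does not spell out the exact rule that reduces the set to six.

from itertools import product

# Hypothetical candidates per abstract interface (names from Figure 3, costs illustrative).
CANDIDATES = {
    "hotelSearch": [("PremiereHotelSearch", 20.0), ("DeluxeHotelSearch", 20.0)],
    "checkFlight": [("FastFly", 9.0), ("BuzzAirlines", 10.0)],
    "getShuttleTicketPrice": [("JrShuttleService", 10.0), ("FastShuttleService", 12.0)],
    "getCabPrice": [("BarCab", 7.0), ("FooCab", 9.0), ("JCab", 5.0)],
}

def binding_combinations(candidates):
    """Yield every combination of one concrete service per abstract interface."""
    names = list(candidates)
    for combo in product(*(candidates[n] for n in names)):
        yield dict(zip(names, combo))

def within_budget(binding, budget):
    return sum(cost for _, cost in binding.values()) <= budget

if __name__ == "__main__":
    all_bindings = list(binding_combinations(CANDIDATES))
    print("all binding combinations:", len(all_bindings))   # 2 * 2 * 2 * 3 = 24
    affordable = [b for b in all_bindings if within_budget(b, budget=45.0)]
    print("combinations left to test under the budget:", len(affordable))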


Such a test can be triggered by periodic invocations or by notification of a service update, if the application server where the service is deployed supports service versioning and advertises it through the interface. To accomplish this, the provider can publish test suites as a part of the service specification, as Figure 4 illustrates. The integrator can complement them with additional test suites and a capture-replay strategy, monitoring and replaying previous service executions.
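A minimal sketch of such a regression run, assuming the provider-published test cases and the integrator's captured executions are both available as (input, expected output) records; the data format, the SSN-style payload, and the echo stub standing in for the remote call are all invented for illustration.

def invoke_service(payload: dict) -> dict:
    """Stand-in for the real remote invocation; replace with a SOAP/REST call."""
    # Echo stub so the sketch runs end to end without a network.
    return {"ssn": "AAA-00-" + payload["surname"][:4].upper()}

PROVIDER_SUITE = [
    # Provider-published test cases (the contract): input plus expected output.
    {"input": {"surname": "Turing", "birth_year": 1912}, "expected": {"ssn": "AAA-00-TURI"}},
]
MONITORED_EXECUTIONS = [
    # Captured I/O from earlier, known-good interactions (capture-replay).
    {"input": {"surname": "Hopper", "birth_year": 1906}, "expected": {"ssn": "AAA-00-HOPP"}},
]

def regression_test(cases, invoke=invoke_service):
    failures = []
    for case in cases:
        actual = invoke(case["input"])
        if actual != case["expected"]:
            failures.append((case["input"], case["expected"], actual))
    return failures

if __name__ == "__main__":
    failures = regression_test(PROVIDER_SUITE + MONITORED_EXECUTIONS)
    print("regression failures:", len(failures))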

TESTING COSTS AND THE ROLE OF MONITORING
Regardless of the test method, testing a service-centric system requires the invocation of actual services on the provider's machine. This has several drawbacks. From the system integrator's viewpoint, costs can be prohibitive if integrators must pay for services on a per-use basis. At the same time, massive testing could cause denial-of-service phenomena for service providers. You might not be able to repeatedly invoke a service for testing when the service invocation affects the real world, as in the case of flight booking or payment services.

In most cases, service testing implies several service invocations, leading to unacceptably high costs and bandwidth use. Several cost-cutting approaches aim to reduce a test suite's size, but this might not be enough.

Service monitoring can play an important role in cutting testing costs. Monitoring is useful for such tasks as

• preventing failures, for example, by replacing a soon-to-be-unavailable service with an equivalent one;

• verifying that a service invocation meets given pre- and postconditions; and

• triggering recovery actions when needed.
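A minimal monitoring wrapper along these lines might look as follows; the pre- and postconditions, the stand-in services, and the recovery policy (rebinding to a backup endpoint) are hypothetical.

from typing import Any, Callable

class MonitoredService:
    """Wraps a service invocation with precondition, postcondition, and recovery checks."""

    def __init__(self, invoke: Callable[[dict], Any],
                 pre: Callable[[dict], bool],
                 post: Callable[[dict, Any], bool],
                 recover: Callable[[dict], Any]):
        self.invoke, self.pre, self.post, self.recover = invoke, pre, post, recover

    def call(self, request: dict):
        if not self.pre(request):
            raise ValueError(f"precondition violated: {request}")
        try:
            response = self.invoke(request)
        except Exception:
            return self.recover(request)          # e.g., rebind to a backup service
        if not self.post(request, response):
            return self.recover(request)          # contract violation detected
        return response

if __name__ == "__main__":
    primary = lambda req: {"total": req["nights"] * 80}   # stand-in primary service
    backup = lambda req: {"total": req["nights"] * 95}    # stand-in equivalent service
    monitor = MonitoredService(
        invoke=primary,
        pre=lambda req: req["nights"] > 0,
        post=lambda req, resp: resp["total"] >= 0,
        recover=backup,
    )
    print(monitor.call({"nights": 3}))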

Web Services-Interoperability

Web Services-Interoperability (WS-I, http://www.ws-i.org) is an open industrial organization that promotes Web services interoperability across platforms, operating systems, and programming languages. WS-I helps define protocols for the interoperable exchange of messages between Web services. More specifically, WS-I delivers

➤ profiles that provide implementation guidelines on using related Web services specifications together for optimal interoperability,

➤ sample applications that demonstrate Web services applications complying with WS-I guidelines, and

➤ testing tools that help determine whether the messages exchanged with a Web service conform to WS-I guidelines.

Two notable WS-I testing tools are the monitor and the analyzer. The monitor provides an unobtrusive way to log Web service messages using a man-in-the-middle approach. The analyzer determines whether a set of Web-service-related artifacts (messages, service descriptions, and UDDI (universal description, discovery, and integration) entries) conforms to the requirements in the WS-I Basic Profile 1.0.

[Figure 4 diagram: an application server hosting the service and its test cases, a regression testing tool, and a monitoring component connect Jim (service provider) with Alice and Jane (service integrators); the numbered interactions (1 through 6c) are described in the caption below.]

Figure 4. Service regression testing.

(1) Jim (a service provider) deploys a service with a test suite. (2) Alice (a service integrator) acquires the service and test suite, which she can complement with her own test cases. (3) She then regularly uses the service. After a while, (4) Jim updates the service, and (5a) Jane (another service integrator) regularly uses the new service with no problems. Meanwhile, (5b) a monitoring system monitors Jane's interaction. (6a) Alice tests the service. (6b) She can use data monitored from Jane's executions to reduce the number of service invocations during testing. For more details, see the article by Bruno and colleagues in the "Further Reading" sidebar.


Monitoring can also help testing in various ways. We can use monitored I/O in a capture-replay scenario. Additionally, we can use monitoring to mimic service behavior during testing activities. This use reduces the number of invocations needed, as in Figure 4.

In a closed corporate environment, testing tools can directly access monitoring data. In an open environment, such information can't always be disclosed. However, the provider can publish services with a testing interface. Integrators execute test cases against this interface, which actually invokes the services only when necessary. In all other cases, it forwards the invocation to a service simulator (or stub) that uses monitoring data to mimic the service behavior.
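A sketch of such a testing interface, assuming the monitored request/response pairs are available to it as a lookup table: replayed requests cost nothing, and only requests the stub has never seen fall through to the real (possibly billable) service. All names and formats are illustrative.

import json
from typing import Callable

class TestingInterface:
    """Answers test invocations from monitored data when possible,
    invoking the real service only for unseen requests."""

    def __init__(self, monitored_log: dict, real_invoke: Callable[[str], str]):
        self.monitored_log = monitored_log      # request -> previously observed response
        self.real_invoke = real_invoke
        self.real_calls = 0

    def invoke(self, request: str) -> str:
        if request in self.monitored_log:
            return self.monitored_log[request]  # replay: no cost, no provider load
        self.real_calls += 1
        response = self.real_invoke(request)
        self.monitored_log[request] = response  # enrich the stub for later test runs
        return response

if __name__ == "__main__":
    # Hypothetical monitored data; in practice this would come from the provider's monitor.
    log = {json.dumps({"city": "Rome"}): json.dumps({"hotels": 12})}
    real = lambda req: json.dumps({"hotels": 7})      # stand-in for the actual paid service
    ti = TestingInterface(log, real)
    print(ti.invoke(json.dumps({"city": "Rome"})))    # served from monitored data
    print(ti.invoke(json.dumps({"city": "Milan"})))   # requires one real invocation
    print("real invocations during testing:", ti.real_calls)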


Table 1. Highlights per testing dimension (needs and responsibilities, advantages, and issues for each stakeholder and testing level).

Service functional testing
• Developer: white-box testing available; service specification available to generate test cases; nonrepresentative inputs only.
• Provider: needs the service specification to generate test cases; limited cost; black-box testing only; nonrepresentative inputs only.
• Integrator: needs the service specification and a test suite deployed by the service provider; black-box testing only; high cost.
• Third-party certifier: assesses only selected services and functionality on behalf of someone else (should be an impartial assessment of all services); small resource use for the provider (one certifier tests the service instead of many integrators); nonrepresentative inputs only; high cost.
• User: service-centric application self-testing to check that it ensures functionality during runtime; services have no interface to allow user testing.

Service nonfunctional testing
• Developer: necessary to provide nonfunctional specifications to the provider and consumers; limited cost; nonrealistic testing environment.
• Provider: necessary to check own ability to meet the SLA stipulated with the user; limited cost; testing environment might not be realistic.
• Integrator: high cost; might depend on network configuration; difficult to check whether the SLA is met; nonrealistic testing environment.
• Third-party certifier: assesses performance on behalf of someone else; small resource use for the provider (one certifier tests the service instead of many integrators); nonrealistic testing environment; high cost.
• User: service-centric application self-testing to check that it ensures performance during runtime.

Integration testing
• Developer: can be a service integrator on his own.
• Provider: not applicable.
• Integrator: service call coupling increases because of dynamic binding; must regression test a composition after reconfiguration or rebinding; quality-of-service testing must consider all possible bindings.
• Third-party certifier: not applicable.
• User: not applicable.

Regression testing
• Developer: limited cost (the service can be tested off-line); unaware of who uses the service.
• Provider: limited cost (the service can be tested off-line); can be aware that the service has changed but unaware of how it changed.
• Integrator: might be unaware that the service has changed; high cost.
• Third-party certifier: retests the service during its lifetime only on behalf of integrators, not other stakeholders; lower bandwidth use than having many integrators test; nonrealistic regression test suite; high cost.
• User: service-centric application self-testing to check that it works during evolution.


SOAs promise loosely coupled and dynamic connections that let applications take advantage of ever-expanding service capabilities, or even services unknown and unforeseen at design time. This is, of course, appealing for architects called to design systems in today's competitive, highly uncertain business landscape.

Table 1 summarizes SOA testing's main highlights for different stakeholders and testing levels. Central to all of these issues is that SOAs, and in particular Web services, aim to build systems comprising services that were developed by providers outside the enterprise. Thus, testing is boundless. Of course, users can always test services and their compositions independently when using the final service-centric system. However, testing would happen too late in the development cycle, thus making it difficult to identify a known error's source.

Devising testing strategies and methods for SOAs is still a young research area. The foreseeable diffusion of this architectural style depends on developing a trustable means of functional and nonfunctional testing of services and their compositions. ■

Gerardo Canfora is a full professor of computer science in the Faculty of Engineering and the director of the Research Centre on Software Technology (RCOST) of the University of Sannio in Benevento, Italy. Contact him at [email protected].

Massimiliano Di Penta is an assistant professor at the University of Sannio in Benevento, Italy, and a lead researcher at the Research Centre on Software Technology (RCOST). Contact him at [email protected].

Acknowledgments
This work is framed in the European Commission VI Framework IP Project SeCSE (Service-Centric System Engineering; http://secse.eng.it), contract no. 511680.

For further information on this or any other computing topic, visit our Digital Library at http://www.computer.org/publications/dlib.
