iTKO Technology Insights Whitepaper Series

Service Virtualization and the DevTest Cloud

By John Michelsen, Chief Scientist & Founder, iTKO

February 2011

Realize faster cycle times and cost efficiencies through Cloud-based development and test environments with Service Virtualization


Service Virtualization and the DevTest Cloud | © iTKO, 2011 | www.itko.com






Contents

Executive Summary

The Case for the DevTest Cloud

Volatility in Development & Test Labs

Steps for Standing Up the DevTest Cloud

The DevTest Cloud’s Unique Need: Service Virtualization

Constrained Utilization of Cloud Systems

The Missing Piece: Service Virtualization

Why “Stubs” Aren’t Good Enough

Service Virtualization with LISA

Customer Example of SV for DevTest: An international bank

Benefits of Service Virtualization for Cloud

Solution: DevTest Cloud Manager (DCM)

Customer Example: Rapidly Ballooning to New Markets, but Weighed Down

Insulate Yourself from Cloud Market Shakeout

Best Practice: Rolling out a DevTest Cloud Platform

Conclusion

About the Author

About iTKO


Executive Summary

Software development practices have lost their sense of equilibrium, and the Cloud can help bring it back. The forces of increased complexity, interconnectivity, geographic distribution, and pace of change are invalidating some of the fundamental assumptions we have relied on in the software development process. In turn, this has created an imbalance in the economy in which software development operates.

Development and test processes that rely heavily on repeated execution of steps via manual labor cannot meet the organizational goals of reducing cycle time and costs. We have found that for most development organizations, their greatest constraints to meeting their objectives are outside their own control. Those constraints are the lack of availability, capacity, and stability of required development and test environments throughout the software lifecycle.

While most speak of the cloud for use in production, we find even greater cause for the use of a cloud platform for pre-production dev and test labs, or a “DevTest” Cloud. This paper makes the case for the DevTest Cloud, and describes solutions and best practices for proper utilization of cloud infrastructure in the software dev and test economy.

The DevTest Cloud can bring equilibrium to this issue of constrained development. It provides for on-demand setup and teardown of labs from a virtualized infrastructure with the appearance of infinite capacity. The technologies required for a DevTest Cloud include:

• A hypervisor to host machine images (IaaS)
• A provisioning facility to manage and orchestrate the environment (IaaS)
• Service Virtualization (PaaS) technology to solve for a number of issues specific to the DevTest Cloud related to off-cloud, unavailable, costly, or highly data-volatile systems that teams depend upon during development

For the DevTest Cloud to fulfill its business justification it must be completely self-contained in the cloud. As soon as dependencies exist “off-cloud” the rapid provisioning and capacity benefits halt. Virtualization of both the in-scope system resources as Virtual Machines (VMs), and the capture and simulation of off-cloud or unavailable resources and capacity as Virtual Services (VSes), allows developers, testers and performance engineering teams to work in parallel at a fraction of the expected infrastructure cost.


[Figure: Current State: Volatile Utilization of Lab Resources]

The Case for the DevTest Cloud

One only needs to think about two important dynamics in the development phase of software to realize that Cloud is compelling. Think about the number of development labs associated with each production lab where an application runs. For every production infrastructure, there are three, five, or even more pre-production labs that deliver applications into that production infrastructure. Every one of these labs has an even more volatile capacity demand function.

It is important to note that the provisioning volatility is much higher in pre-production development labs than in production infrastructure. These systems constantly require new configurations, setups and teardowns to make way for various teams’ activities. This equates to both high costs and a wild sprawl of environments.

Therefore, Cloud’s value proposition is arguably even more relevant in development and testing (DevTest) than in production.

Volatility in Development & Test Labs

Cloud is best used when demand volatility varies across the many uses of a shared infrastructure. Different applications have different capacity needs over time. The ability to leverage one common resource pool among many teams gives the appearance of higher capacity on a per-team basis when, in fact, we are simply leveraging the unused capacity of other teams.

In the utilization graph shown here, many teams are leveraging shared infrastructure. One team might peak its usage during performance tuning or a “big bang” release cycle. Other teams are simply doing typical dev and test activities, and they are generating no such peak. The ability for team A to leverage the shared infrastructure gives them the capacity they need during a time when other teams don’t need that additional capacity. In general, the environment will see a steady, lower threshold of capacity requirements, and of course a variety of peak times given various activities as shown in the graph.

When an enterprise’s IT management team understands their capacity in this regard, they have a greater ability to make sound economic decisions about how to leverage cloud-based infrastructure. There is an economic equilibrium of demand that dictates what infrastructure should be provisioned on-site at the lowest possible cost, and what additional infrastructure could be provisioned either on-site or off-site in public cloud infrastructure.

[Figure: Challenges of Pre-Production Environments]

[Figure: Utilization of Lab Resources with Cloud]

[Figure: Using Cloud for Pre-Production]

Some customers will have significant issues with leveraging public infrastructure, but in time, we see those issues giving way to solutions that can provide benefits associated with leveraging public clouds for peak capacity needs.

Steps for Standing Up the DevTest Cloud

The general process of leveraging cloud infrastructure for pre-production usage involves the following three steps, illustrated here:

• First, pool the pre-production resources of several teams that will be leveraging the infrastructure together. This means establishing a single environment from the resources of several presently available environments.

• Second, implement a virtual lab management (or VLM) provisioning solution or IaaS (Infrastructure as a Service) that allows teams to provision their needs for computing resources dynamically. To do this, start migrating physical computing resources into virtual assets and store them in a catalog. That catalog would therefore consist of virtual machine images of each of the systems that the various teams may need at a given time. Think of this catalog as the ingredients that are needed by teams. The virtual lab management solution is responsible for giving administrators the ability to leverage those ingredients in a recipe. We will recommend later that you even leverage desktop virtualization to include the dev/test workstation images. What about the myriad of systems that are not available to be imaged into the catalog? That’s a discussion for this paper’s next section.

• Third, change the practice of provisioning dev and test environments. As additional teams are brought online, it is quite possible that additional physical hardware needs are not required. Instead, those teams will leverage the existing physical infrastructure, but provide additional virtual machine images in the catalog.
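The catalog-and-recipe idea in the steps above can be sketched as a small data structure. This is a minimal illustration, assuming hypothetical image names and sizes; `provision_lab` is not any vendor's actual API.

```python
# Sketch: VM images are the "ingredients", and a lab is a recipe that stages
# a collection of them onto pooled infrastructure as one self-contained unit.
# All names and sizes here are illustrative.

catalog = {
    "web-server": {"cpu": 2, "ram_gb": 4},
    "app-server": {"cpu": 4, "ram_gb": 8},
    "order-db":   {"cpu": 4, "ram_gb": 16},
}

def provision_lab(name, image_names):
    """Stage a collection of catalog images as one lab unit that can be
    provisioned, decommissioned, and secured altogether."""
    missing = [i for i in image_names if i not in catalog]
    if missing:
        raise KeyError(f"images not in catalog: {missing}")
    return {"lab": name, "machines": {i: catalog[i] for i in image_names}}

lab = provision_lab("team-a-regression", ["web-server", "app-server", "order-db"])
print(sum(m["cpu"] for m in lab["machines"].values()))  # total CPUs staged: 10
```

The point of the lab abstraction is that teams request the recipe, not the individual images, which keeps image management with the infrastructure team.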

In this next utilization graph with Cloud introduced, the expected “equilibrium of capacity” the company is attempting to budget for is shown as the dashed line. Industry data suggests that highly utilized, on-premise computing resources are the least expensive available. That same research suggests that under-utilized on-premise resources are the most expensive and should be avoided.

[Figure: Current Dev & Test Environment with VMs]

Therefore what we want to do is provide for our constant “equilibrium” of capacity needs with on-premise resources, and enhance our peak capacity needs with external cloud resources (either public cloud or other shared cloud infrastructure). As additional teams are brought online, the net effect might be to increase the need for on-premise capacity. But, as many companies have plenty of under-utilized capacity, it is almost equally possible that there will be a negligible increase in capacity required and we will need no additional computing resources. Through this incremental approach, we have made no additional capital (or CapEx) investment in infrastructure, but have provided what appears to be an entirely new infrastructure to that team as a smaller, more incremental operating (OpEx) expense.

To bring it all together, consider this diagram. As teams adopt cloud infrastructure, they form a catalog of virtual machines that must be provisioned dynamically. That catalog is the sum of all the teams’ needs. An individual team might have access to some, but not all of the available computing images. They may also have quotas on their storage and capacity utilization.

The act of provisioning environments amounts to making a request to stage a collection of images onto the pooled virtual server resources. The VM images will be catalogued in a particular collection, or a lab. The ability to work with a whole collection of machines as one lab is critical to provisioning efficiency. Teams should not be required to manage the various images directly, as this would misalign skills in the organization.

When a development or test team needs access to an application, it is likely that several images might need to be provisioned. That collection is typically called a lab. That lab is shown in the virtual lab management solution as a self-contained unit that can be provisioned, decommissioned, and secured altogether. What used to take days or in some cases weeks now can take just minutes. And even before the set up of the environment, the acquisition and installation of hardware and base software time has been reduced from weeks or months, to in most cases no time at all.

So, we have spent some time making the case for using the cloud for development and test labs. We’ve argued that in some ways, the case for leveraging cloud in pre-production is even more compelling than it is in production. We would of course like to see cloud leveraged everywhere possible. But if you are going to embark on a cloud strategy, I would argue that the lowest-risk and most rewarding place to start is in the pre-production dev and test labs.



[Figure: Volatile Utilization of Public Cloud]

[Figure: Cloud Lab Constraints]

The DevTest Cloud’s Unique Need: Service Virtualization

As excited as we are about leveraging Cloud for dev and test, there is a critical piece of platform technology that is required to make these labs effective. While it is not appropriate for production use, it is critical to make the value proposition of pre-production clouds possible. It is this capability that differentiates a cloud infrastructure as suitable for DevTest use.

Let’s consider the following: my team needs three virtual machines (VMs) to be provisioned in a lab for my development or test activities. Two of those VMs make use of resources that are off-cloud. My order management system requires access to a mainframe. My application server makes calls to a third-party application, and also to a terabyte-sized database.

These systems will not be provisioned in my cloud. It is simply not possible to image the disk of the mainframe as a VM, and stick it in a cloud. (An ironic side comment, however, is that mainframes actually do have virtual machines. The issue here is that they are simply not provisioned the way that we think of cloud infrastructure today.)

So, do these wires hanging out of my cloud provide me with issues? They absolutely do. One of the biggest benefits associated with going to the cloud is the ability to provide the appearance of unlimited capacity. The ability to meet the needs of our constantly changing demand within the elastic computing facility is wonderful, but if our on-cloud resources make use of off-cloud resources, we are still constrained by the capacity of those off-cloud resources.

Constrained Utilization of Cloud Systems

When we revisit our graph on peak capacity needs, it would be more likely to exhibit the “blackouts” in public capacity shown here. The reality is that some of our need for additional capacity to support peak usage is going to be provided external to the cloud, but those systems will simply not be able to provide such capacity. For example, an attempt was made to leverage additional capacity based on the need to support a performance test, and yet because of either cost issues in leveraging third-party applications, or capacity and access issues of off-cloud resources, or network bandwidth issues, or security issues in accessing the data center, teams were unable to perform the desired task.


[Figure: Addressing Bottlenecks in Development & Testing]

This is in fact a very common problem. Unless your entire application development architecture can be provisioned in the cloud, you will not be able to meet your elasticity and provisioning-efficiency goals.

Let’s explain the math behind this dynamic. In the graphic shown here, we have established a higher throughput of capacity in the front-end of architecture by leveraging cloud. We have taken what was a capacity of two units, up to 20 units each by going Cloud at the web server and XML gateway. But, unless we solve for the entire architecture’s equation, we are still constrained by our lowest throughput component. Because the downstream capacity is still only two units, our overall capacity is still only two units.

The point here? If the most constrained system is off-cloud, additional images loaded on cloud infrastructure will not create additional capacity for your teams to deliver needed functionality.
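The bottleneck arithmetic above reduces to a minimum over the chain of components, which can be sketched in a few lines (unit figures taken from the example in the text):

```python
# Sketch: end-to-end capacity of a serial architecture is gated by the
# slowest component, regardless of how far the front end scales.

def end_to_end_capacity(component_capacities):
    """Overall throughput is the minimum capacity in the chain."""
    return min(component_capacities.values())

before = {"web server": 2, "XML gateway": 2, "mainframe": 2}
after_cloud = {"web server": 20, "XML gateway": 20, "mainframe": 2}

print(end_to_end_capacity(before))       # 2
print(end_to_end_capacity(after_cloud))  # still 2: the off-cloud mainframe gates it
```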

In the field, we rarely hear teams tell us that they need more Intel boxes. Virtualization is already in play with our customers, mostly because machines are underutilized and virtualization raises their utilization. Leveraging these infrastructure and virtualization techniques has already allowed us to increase the utilization of underutilized hardware (Web servers, app servers, and middleware).

Thanks to virtualization, we now realize that those backend systems and third-party applications are the highest constraint. If we provision additional capacity in the front-end architecture, but don’t solve for the back end capacity issues, we have not actually increased our capacity at all.

In summary, we like the economy of cloud, and we want the provisioning efficiency and elastic demand the Cloud promises. But without a solution to this problem of off-cloud resource constraints, we will not achieve either of these benefits.

The Missing Piece: Service Virtualization

The solution to the dilemma of off-cloud resources is simply not to have them. Every team needing to provision a lab for its activities has some systems that are considered in-scope, and others that are out-of-scope. An in-scope system is one on which the team is performing a development change or test directly. An out-of-scope system is one that is required in support of that in-scope system, but is not itself the subject of the development or testing activity: it is a necessary dependency, not the subject of the work.


Why “Stubs” Aren’t Good Enough

The most obvious solution is the usual practice of “stubbing” or mocking those out-of-scope systems by attempting to write code and import data to represent the expected responses of those out-of-scope dependencies. Here’s why that usually does not work.

We need to industrialize or productize the practice of stubbing itself. Some 20 years ago, when I first started development, I was basically on an island. I wrote my own database, wrote my own development tools, my own middleware to get components to talk to each other – in fact, I used to code all kinds of things that today, I would never even dream of building anymore.

The requirements for proper databases and middleware became advanced enough that it was no longer feasible for developers to roll their own. Instead, those requirements became a category of software, and vendors started delivering solutions for that space. Thus we saw the rise of a database market, a middleware market, and so on. We now see that the requirements for simulating components have become too steep to roll your own. So in that light, Service Virtualization is essentially productizing the requirements of stubbing and mocking.

Furthermore, the consumers of a stub often cannot actually perform their downstream dev and test tasks with it. Stubs are inherently brittle, and developers take the most expedient route in simulating the basic functionality or response they expect. Developers think of a stub as “something I can mock up quickly”.

If the stub is not very intelligent, the best you will be able to do with it is to prove connectivity of your application to the stub. For the simplest development use case this sometimes is enough. But for most consumers of the stub, there is so much more intelligence needed, that calling it “something you can mock up quickly” means you don’t understand the true nature of the problem.

Let me explain: If every customer response from a stub has the exact same profile, and the exact same address, account balance, etc. all with hard coded values and dates, then that may help the consumer of that stub with ONE data scenario, but what about customers with high account balances? What about old invoices? What about transactions that occurred yesterday – and will there be transactions “yesterday” occurring next week?

The problem with a stub that has static data is that it won’t support the real variety of scenarios that are needed for a real world application. And you can’t just randomize the data – that would be even worse! Consumers need predictability and control over the data they’re seeing. They need to integrate that data not only from the stubbed application, but from all the other services that might be associated with that particular application they are building.

What we need is a way to simulate the behavior of those out-of-scope systems in such a way that the in-scope system believes it is talking to the live system, but is in fact not. This solution is essentially an advanced productizing of the stubbing or mocking effort that developers have done for years. We call the asset produced a Virtual Service (or VS). A Virtual Service can provide a mechanism to bring all of the systems needed into the cloud. When you provision in-scope systems with virtual machines, and also provision out-of-scope systems as virtual services, you gain the ability to bring the entire lab into the cloud.
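The contrast between a hard-coded stub and a data-driven simulation can be sketched as follows. This is an illustration only: `stubbed_get_customer` and the `VirtualService` class are hypothetical names, not product code.

```python
# A static stub returns the same canned answer for every caller, so it can
# prove connectivity but supports only one data scenario.

def stubbed_get_customer(customer_id):
    # Hard-coded values: same balance and status regardless of input.
    return {"id": customer_id, "balance": 100.00, "status": "active"}

class VirtualService:
    """A data-driven simulation keys its responses off the request, so
    consumers get predictable control over many scenarios (high balances,
    closed accounts, and so on)."""
    def __init__(self):
        self.scenarios = {}

    def learn(self, request, response):
        self.scenarios[request] = response

    def respond(self, request):
        return self.scenarios.get(request, {"error": "unknown customer"})

vs = VirtualService()
vs.learn("CUST-1", {"id": "CUST-1", "balance": 12.50, "status": "active"})
vs.learn("CUST-2", {"id": "CUST-2", "balance": 9_000_000.00, "status": "vip"})

print(vs.respond("CUST-2")["balance"])  # 9000000.0, a scenario a static stub cannot vary
```

The design point is predictability: the data varies by scenario but is never random, so testers remain in control of what they see.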


[Figure: Cloud Lab Constraints]

[Figure: DevTest Cloud — Bottleneck Gone!]

Service Virtualization with LISA

Capturing a virtual service typically entails recording live traffic between an in-scope system and its immediate dependencies. That traffic of requests and responses becomes the input into software that makes the virtual service. But as shown above, the VS can also be assembled from service definitions, data from many sources, as well as visually modeled by subject matter experts to respond as needed for scenarios. This VS then stands in place of the live system dependency so realistically that the application can’t tell the difference, nor can your users.

The great benefit of a virtual service is that you have complete control over the VS that represents a system that you had very little control of in the real world. In this virtual world, you control the behavior, data scenarios, and performance profile of the system that is your dependency.

The result? With virtual services to represent the off-cloud resources, we now have a truly elastic, on-demand computing platform, without the bottleneck of constrained or unavailable services that could not have been replicated as VMs or successfully stubbed. Live VMs are still provisioned for in-scope systems, while virtual services are presented for out-of-scope systems that would otherwise have required off-cloud connectivity. By dispensing with the requirement for off-cloud connectivity, we can reach both the goals of elastic capacity consumption and provisioning efficiency.
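The capture step described above (record live request/response traffic, then stand the recording up in place of the dependency) can be sketched in miniature. The proxy class and the `live_inventory` stand-in are illustrative assumptions, not the product's actual mechanism.

```python
# Sketch: record traffic between an in-scope system and its live dependency,
# then replay the recorded pairs as a virtual service with no live access.

class RecordingProxy:
    def __init__(self, live_system):
        self.live_system = live_system
        self.recorded = {}

    def call(self, request):
        response = self.live_system(request)   # pass through to the live dependency
        self.recorded[request] = response      # capture the request/response pair
        return response

    def as_virtual_service(self):
        """Return a callable that answers from the recording alone."""
        recorded = dict(self.recorded)
        return lambda request: recorded[request]

def live_inventory(sku):                       # stand-in for the real off-cloud system
    return {"sku": sku, "on_hand": 42}

proxy = RecordingProxy(live_inventory)
proxy.call("SKU-7")                            # record one live interaction

virtual = proxy.as_virtual_service()
print(virtual("SKU-7"))                        # replays without touching the live system
```

As the text notes, real virtual services can also be built from service definitions and modeled data, not only from recordings.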


[Figure: Constraint-Free DevTest Cloud with LISA]

So, taken together, by dynamically provisioning from catalogs of LISA’s Virtual Services alongside imaged Virtual Machines, we can realize this view of a constraint-free DevTest Cloud that stands up to provide the appearance of infinite capacity and provisioning efficiency.

Customer Example of SV for DevTest: An international bank

Recently I visited the architecture team of one of the world’s largest banks, where I heard a stunning statistic: their hardware asset management system claims there are more servers deployed in the bank than there are employees.

He then explained to me that the typical project-based budgeting process created the perfectly fertile ground for growing huge server farms. Every project team would justify the expense of its own development, testing, pre-production, and production hardware expenditures. Most every application currently in use at the bank has at least these four environments behind it, even though the maintenance or changes on those applications in many cases go months or years between releases.

This bank is going to get tremendous value from cloud infrastructure, especially from LISA’s virtual service technology in a DevTest platform. Hundreds of pre-production labs are folding into one vastly simpler-to-manage infrastructure, with software-based provisioning on an on-demand basis for any of the required environments. Projects not currently under change will no longer consume power, generate heat, or consume floor space.

Ironically, the greatest challenge most existing pre-prod environments have is that they are never the complete system. A given project purchases some number of servers, which then interact with other project teams’ resources, such as a customer information management system on the mainframe. So even though every team allocated its own hardware budget, they still spend countless months of their development cycle waiting on access, and inefficiently accessing shared system resources. DevTest platform technology will resolve this issue as well. Instead of needing live system access to the mainframe partition, the team will provision a virtual service of the customer information management system on-demand, in the cloud.

This bank will go from literally tens of thousands of pre-production servers to a few hundred, thereby increasing their efficiency, agility, development productivity, and software quality.


[Figure: Continuous Integration with LISA DCM]

[Figure: DevTest Cost Reduction Opportunities]

Benefits of Service Virtualization for Cloud

Using Virtual Services to complete the Dev & Test Cloud allows for a whole new economy in the development of applications. This creates a dramatic decrease in the cost structure for development and test infrastructure. One effect you can see from the graph below is that the overall infrastructure requirements and costs, even for the on-premise cloud, go down.

When Virtual Services represent out-of-scope systems, they utilize computing resources much more efficiently than a live system does. For example, it might take several virtual machines to represent SAP, whereas one Virtual Service may consume only a fraction of the CPU and memory requirements of just one of those machines for pre-production. This means that the overall computing requirements per lab go down considerably.

In addition, when large releases and performance tests occur, and demand surges, the computing resources needed are dramatically lower. For example, in a typical performance test, the entire architecture must scale to the load desired. In the virtual world of virtual machines and virtual services, only the virtual machines must scale. This means that only a fraction of the entire lab must be scaled up, while the typically larger and more complex systems represented by Virtual Services do not need to scale up at all.
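The scaling claim above can be illustrated with rough arithmetic: if only the in-scope VMs must scale with load while Virtual Services keep a constant footprint, the peak lab is far smaller than a fully scaled stack. The unit figures below are illustrative assumptions, not measurements.

```python
# Sketch: in a performance test, VMs scale with the load factor while
# virtual services (standing in for large live systems) stay small.

lab = {
    "web-server":  {"kind": "vm", "units": 2},
    "app-server":  {"kind": "vm", "units": 2},
    "sap-backend": {"kind": "vs", "units": 1},  # VS in place of a large live system
}

def scale_for_load(lab, factor):
    """Total resource units when VMs scale by `factor` and VSes do not."""
    return sum(m["units"] * (factor if m["kind"] == "vm" else 1)
               for m in lab.values())

print(scale_for_load(lab, 1))   # steady state: 5 units
print(scale_for_load(lab, 10))  # 10x load test: 41 units, not 50
```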

Solution: DevTest Cloud Manager (DCM)

iTKO’s LISA helps deliver on the promise of Cloud applications by removing constraints and risks from the software environment, thereby realizing the elasticity and cost efficiencies we expected.

Thus far, we have discussed the benefits of leveraging Cloud for development and test, and introduced a critical platform technology called Service Virtualization for enabling Development and Test clouds, or “DevTest” Clouds, which is a capability of LISA. So now let’s talk about how to leverage cloud infrastructure in ways that were never practical or possible before Cloud came along.

Developers are moving to a continuous integration model, where every build kicks off a series of tests to ensure that the software under construction is getting more and more refined. One of the great challenges in rigorous continuous integration is the need for so many environments to support the many builds and the constant testing going on in those builds. It is clear by now that the cloud is a great resource to solve this problem.

[Figure: Example: In-Scope vs. Out-of-Scope]

As part of the DevTest Cloud platform, the LISA DevTest Cloud Manager solution provides the ability to provision development and test labs, consisting of Virtual Machines (VMs), Virtual Service Environments (VSEs), and automated regression and performance test servers in the Cloud.

As shown here, we have engineered our testing capability with LISA VSEs to leverage cloud infrastructure seamlessly without dependencies. Users are able to launch an entire lab in the cloud, simply by starting a testing activity. For example, you might be running an Ant build from a continuous integration tool. By adding an Ant task to launch a LISA test suite, we can stage that entire test to run in the cloud, along with the dependencies needed for realistic testing. There is no requirement for off-cloud system availability or new hardware to provide this activity.
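A build-time hook of the shape described above might look like the following sketch: provision a lab (VMs plus a virtual service environment) at the start of the test phase and tear it down afterwards. `provision_lab`, `run_test_suite`, and `teardown_lab` are placeholder names, not LISA's actual API.

```python
# Hypothetical CI hook: every build gets a fresh on-demand lab in the cloud,
# runs its tests against it, and releases the capacity back to the pool.

def provision_lab(images):
    """Stand-in for an on-demand lab launch in the DevTest Cloud."""
    return {"vms": list(images), "vse": "running"}

def run_test_suite(lab):
    # Tests run against the lab's VMs and virtual services, with no
    # off-cloud dependencies required.
    return lab["vse"] == "running"

def teardown_lab(lab):
    lab["vse"] = "stopped"          # release capacity back to the pool

lab = provision_lab(["app-server", "order-db"])
try:
    passed = run_test_suite(lab)
finally:
    teardown_lab(lab)               # always decommission, even on failure

print("build", "passed" if passed else "failed")
```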

Insulate Your Teams from Cloud Market Shakeout

LISA DCM provides a mechanism by which LISA products can provision, scale, and shut down cloud infrastructure seamlessly. But its greatest benefit might be the way it insulates your teams from the actual interface tools and APIs of the cloud infrastructure.

Like most technology on-ramps, vendors and customers will predictably try many solutions until they settle on one. Vendor changes will come frequently and significantly as functionality and market focus improve. Enterprises will likely have expert infrastructure teams implementing and learning to tweak the interfaces of several vendor offerings concurrently. All of this is good and healthy practice. But app dev teams do not need to be subjected to this vendor and solution shakeout; hundreds of users who have better things to do will naturally become inhibitors to the infrastructure team’s need to make continuous improvements in the cloud infrastructure selection and version deployed.

LISA DCM provides a single interface for app dev teams that enables all the functions they need of the cloud offered by their infrastructure team, regardless of the underlying vendor technologies: lab listing, startup, monitoring, and shutdown. DCM integrates the various cloud providers and brokers between them seamlessly.

Customer Example: Rapidly Ballooning to New Markets, but Weighed Down

A customer recently related a story that sounds like a common theme in the industry. This customer’s business is experiencing fairly significant growth. They are expanding into new markets and territories in what sounds like a “gold rush” or “land grab” kind of situation. Whoever engages the customer first in these emerging markets is likely to have a generation of business advantage over competitors.

Of course, to support additional products, additional distribution, new languages, and new regions of the world in which to perform business processes, big changes and additions to their systems are required. The vice president of development said, “I am bringing some really ugly numbers to the board based on the traditional approach we have taken to building our applications.” The development infrastructure expenditure was expected to be in the tens of millions of dollars.

In our experience, on a per-project basis, roughly 20% of the outlay for an environment goes to the in-scope systems (what is actually being developed), while 80% goes to the out-of-scope, dependent systems (downstream systems of record). For example, a website built to offer an existing product or service to a new market might have two or three Intel-based servers in scope. But the out-of-scope dependencies in the development lifecycle include the entire corporate ERP system, order and inventory management systems, and customer information management systems. These systems are necessary for developing and testing the e-commerce site, and they drive the infrastructure cost quite high.

When we performed a business value assessment on a cloud infrastructure fully leveraging our virtual service technology, we discovered that the additional capital outlay would be 1/100th of the previously expected amount.
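To make the 20/80 arithmetic concrete, here is a small worked sketch. The dollar figures are hypothetical illustrations, not numbers from the assessment.

```python
def environment_cost(in_scope_cost):
    """Apply the 20/80 rule of thumb: in-scope systems account for
    ~20% of the outlay, so out-of-scope dependent systems add ~4x more
    (80% / 20% = 4)."""
    out_of_scope_cost = in_scope_cost * 4
    return in_scope_cost + out_of_scope_cost

# Say the two or three in-scope servers cost $30,000 in total:
traditional = environment_cost(30_000)  # in-scope + dependent systems
# With service virtualization standing in for the dependent systems,
# the additional capital outlay in the example above was ~1/100th:
virtualized = traditional / 100
```

Under these illustrative numbers, a $30,000 in-scope footprint implies a $150,000 traditional environment, most of it spent replicating systems the project does not even own.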

What is clear from this example is that customers who require expansion of their activities or rapid changes to existing activities are going to see a compelling business case for moving to the cloud.

Best Practice: Rolling out a DevTest Cloud Platform

We recommend the following best practice for customers to consider for the rollout of cloud infrastructure for pre-production.

At present, when a lab is provisioned on physical hardware, only the servers associated with the project are set up. Typically, we expect the development and test teams to have their own workstations with software appropriate for their use in support of their role. This means that the entire lab really consists of the development workstations and the lab servers.

We recommend, in contrast, that the entire lab's required resources be catalogued and provisioned from the cloud infrastructure, including machine images for the developers' desktops. While this requires developers to use a remote desktop technology to reach their actual development workstations, in most cases the network capacity to support efficient access in this manner is already present.

We make this recommendation to provide three primary benefits:

• First, rapid provisioning of the servers in the lab is a wonderful benefit of this cloud infrastructure, but if the development desktop is still statically provisioned and has significant configuration effort associated with it, we lose a lot of our rapid provisioning agility.

• Second, we are seeing more and more organizations move to a project-billing style resource allocation model. In such a model, all the development resources in the organization are considered consultants and are billed to projects as needed. In this model, frequent resets of the development workstations are needed unless one is leveraging the entire environment from the cloud.

• Third, the security of the systems and data used in development and testing is enhanced: the entire lab can be firewalled from outside access except for authenticated users reaching the desktop UI. Even the transfer of data out of the lab can be restricted while still giving team members an effective environment and access to data.
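The catalog-and-provision approach above might be sketched as follows. The catalog structure, image names, and the `provision_image` callback are hypothetical illustrations, not a real provisioning API.

```python
# Hypothetical lab catalog: the ENTIRE lab, servers and developer
# desktops alike, is described as cloud machine images.
LAB_CATALOG = {
    "ecommerce-project": {
        "servers": ["app-server-image", "db-server-image"],
        "desktops": ["dev-desktop-image"] * 5,  # one per developer
    },
}

def provision_lab(lab_name, provision_image):
    """Provision every image in the lab from the cloud.
    `provision_image` is whatever vendor call stands up one image;
    developers then reach their desktops over remote desktop,
    inside the firewalled lab."""
    lab = LAB_CATALOG[lab_name]
    return [provision_image(img) for img in lab["servers"] + lab["desktops"]]
```

Because the developer desktops live in the same catalog as the servers, a project reset or a consultant rotation is a re-provision rather than a manual workstation rebuild.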

Conclusion

Properly equipping teams with a cloud-based platform to develop and test enterprise applications is essential to bringing the software lifecycle back into economic balance and to overcoming the forces of change and complexity in those applications. The DevTest Cloud described above provides a great start to your efforts.

Refreshingly, moving down this path is not a “rip and replace” of your existing architecture or application lifecycle processes. DevTest Clouds optimize the utilization of your existing infrastructure.

As vendors continue to evolve their solutions to leverage the DevTest Cloud, even more automation and self-managing environments will emerge. A future paper from us will share our thinking on the testing, validation, and monitoring of systems in the DevTest Cloud.

iTKO’s LISA provides a platform for making Cloud work for the software development lifecycle. We invite you to research more about DevTest Clouds at our resource site at http://itko.com/cloud, and contact iTKO to explore how leading companies are leveraging the Cloud to realize greater flexibility with lower cost and risk.

About the Author

John Michelsen, Chief Scientist & Founder, iTKO, Inc.

John has more than twenty years of experience as a technical leader at all organizational levels, designing, developing, and managing large-scale, object-oriented solutions in traditional and network architectures. He is the chief architect of iTKO's LISA virtualization and software testing product, and a leading industry advocate for optimizing the lifecycle of enterprise applications.

Before forming iTKO, Michelsen was Director of Development at Trilogy Inc. and VP of Development at AGENCY.COM. He has served as Chief Technical Architect at companies such as Raima, Sabre, and Xerox while working as a consultant. Through work with clients like Cendant Financial, Microsoft, American Airlines, Union Pacific, and Nielsen Market Research, John has deployed solutions on technologies ranging from the mainframe to the handheld device.

About iTKO

iTKO helps customers transform the software development and testing lifecycle for greater quality and agility in an environment of constant change. iTKO's award-winning LISA™ product suite can dramatically lower quality assurance costs, shorten release cycles, reduce risks, and eliminate critical development and testing constraints by virtualizing IT resources to provide accessibility, capacity, and security as needed across interdependent teams.

LISA test, validation, and virtualization solutions are optimized for distributed, multi-tier applications that leverage SOA, BPM, cloud computing, integration suites, and ESBs. iTKO customers include industry leaders such as eBay, American Airlines, Citigroup, Time Warner, SwissRe, Wells Fargo and government agencies including the U.S. Department of Defense.

For more information, visit http://www.itko.com or read our blog at http://blog.itko.com.

iTKO LISA
1505 LBJ Freeway | Suite 250 | Dallas, TX 75234 USA

www: http://www.itko.com | email: [email protected] | phone: 877-BUY-ITKO (289-4856)

© 2011, Interactive TKO, Inc. All rights reserved.